We Solved the Wrong Problem for 30 Years
Things are evolving fast, and so are we. This is a reflection on what we have learned about software delivery over the last three decades, and how we are applying those lessons to a fundamentally different, AI-native way of building software.
I have been building software for three decades. Waterfall, RUP, XP, Scrum, SAFe, Kanban, "lowercase-a agile," and now whatever we are calling the post-agile thing. I have watched this industry reinvent its process every five years while the underlying problem never changes.
Here is the problem: your engineers spend more time coordinating work than doing work. And every methodology you have adopted has been an increasingly sophisticated way of managing that coordination, rather than asking why it exists in the first place.
Stripe's Developer Coefficient report found that developers spend 42% of their working week on technical debt and maintenance rather than building new capabilities. Research by Gloria Mark at UC Irvine documented that it takes an average of 23 minutes to fully regain focus after a single interruption. In our own work at Quantivex, across 30+ enterprise engagements over the years, we consistently see that roughly 60% of feature delivery time is organizational overhead.
That number has not improved in 30 years. It has gotten worse.
The Ceremony Trap
I remember when Scrum felt revolutionary. After years of waterfall death marches, the idea that we could work in short iterations, get feedback, and adapt was genuinely transformative. It was better. Significantly better.
But Scrum brought its own overhead. Standups became status reporting rituals. Planning sessions became estimation theater. Retrospectives became the same conversation every two weeks. Review gates became the place where problems were discovered too late to fix cheaply. We traded one set of ceremonies for another, and over time the ceremonies calcified into the very bureaucracy they were designed to replace.
SAFe made it worse by layering enterprise coordination on top of team coordination. Now you have ceremonies about ceremonies. PI Planning is a multi-day event where hundreds of people synchronize work that could be synchronized by a system that tracks dependencies.
The uncomfortable truth: every one of these ceremonies exists to compensate for an information architecture failure.
- Standups exist because you cannot see who is working on what.
- Planning exists because you cannot see what work is ready.
- Gates exist because quality is not checked continuously.
- Retrospectives exist because you cannot see your own process patterns.
These are information retrieval problems. We have been solving them with calendar invites.
What AI Actually Changes (and What It Does Not)
Now AI arrives and the industry loses its mind in two opposite directions. One camp wants to "vibe code" everything: throw requirements at an LLM and ship whatever comes out. The other camp wants to bolt AI onto their existing process: AI-powered standups, AI-generated Jira tickets, AI iteration summaries.
Both miss the point entirely.
Vibe coding produces code nobody understands, nobody can maintain, and nobody can verify meets requirements, because there are no explicit requirements. It is the worst of the "move fast and break things" mentality, now with the ability to move faster and break things at scale.
AI-augmented Scrum is putting a turbocharger on a horse cart. You still have the ceremonies, the handoffs, the gates. They are just faster. The structural overhead remains.
What AI actually enables, if you are willing to rethink the information architecture rather than just automating the existing process, is something fundamentally different: you can make coordination ceremonies unnecessary by solving the information problem they were compensating for.
This is not a theoretical claim. This is what we are building.
Agentic Forge: What We Actually Did
At Quantivex, we designed Agentic Forge. It is an AI-native delivery framework that replaces stage-based process (waterfall, agile, whatever) with flow-based delivery.
The core idea is simple, but executing it properly requires respecting the engineering disciplines that actually matter while discarding the process ceremonies that do not.
What we kept:
Explicit decisions with documented rationale. Testable requirements with acceptance criteria. Functional decomposition into independently verifiable slices of behavior. Integration testing that proves the system works. Code review against objective criteria. Governance and auditability. Dependency tracking. All of these are engineering disciplines that produce better software regardless of what process you wrap around them.
What we discarded:
Standups, iteration planning, review gates, estimation poker, velocity tracking as a performance metric, coverage percentages as quality indicators, TDD as a methodology, unit tests that mock everything, SOLID as prescriptive rules, and every ceremony that exists because information is not accessible.
What we replaced them with:
A system where the information is always accessible, structured, queryable, and continuously validated.
An engineer starts their day by asking "what should I work on?" and the system responds with items ranked by business priority, filtered by dependency readiness, with blocked decisions quantified by cost. No standup needed. The information is current because the system maintains it, not because someone reported it yesterday.
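The "what should I work on?" query above can be sketched as a function over the work-item graph itself. This is an illustrative model, not Agentic Forge's actual implementation; the `WorkItem` fields and IDs are assumptions for the sake of the example.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class WorkItem:
    id: str
    priority: int                       # business priority, higher = more urgent
    depends_on: List[str] = field(default_factory=list)
    done: bool = False

def whats_next(items: List[WorkItem]) -> List[WorkItem]:
    """Answer 'what should I work on?' from the dependency graph.

    An item is ready when every dependency is done; ready items are
    ranked by business priority. The data is the status report.
    """
    finished = {i.id for i in items if i.done}
    ready = [
        i for i in items
        if not i.done and all(d in finished for d in i.depends_on)
    ]
    return sorted(ready, key=lambda i: i.priority, reverse=True)

backlog = [
    WorkItem("AUTH-1", priority=8, done=True),
    WorkItem("AUTH-2", priority=9, depends_on=["AUTH-1"]),
    WorkItem("PAY-1",  priority=7, depends_on=["AUTH-2"]),  # still blocked
    WorkItem("UI-1",   priority=5),
]

print([i.id for i in whats_next(backlog)])  # → ['AUTH-2', 'UI-1']
```

The point is not the twenty lines of Python; it is that once work items, priorities, and dependencies live in a structured store, "who is blocked on what" is a query result, not a meeting agenda.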
When they start a story, the system loads the full context: why this exists, what it must satisfy, what patterns to follow (discovered by reading the existing codebase, not assumed), and what it enables downstream. No Confluence archaeology, no Slack searching, no "ask the person who remembers."
The implementation agent produces a plan that follows existing code patterns, writes integration tests by default (unit tests only for pure logic with no I/O), and explicitly documents what it chose NOT to build and why. This is not vibe coding. Every line traces to a requirement. But it is also not ceremony-driven development. No pattern is applied because it is "best practice." Everything earns its place by solving a real problem.
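"Every line traces to a requirement" can itself become a mechanical check rather than a review-gate ceremony. The sketch below assumes a hypothetical shape for story context and plans; the field names and IDs are illustrative, not the framework's real schema.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class StoryContext:
    story_id: str
    rationale: str                      # why this story exists
    acceptance_criteria: List[str]      # e.g. ["AC-1", "AC-2"]

@dataclass
class Plan:
    steps: Dict[str, str]               # step description -> criterion it satisfies
    not_built: Dict[str, str]           # rejected option -> documented reason

def untraced(plan: Plan, ctx: StoryContext) -> List[str]:
    """Return plan steps that cannot be traced to an acceptance criterion."""
    valid = set(ctx.acceptance_criteria)
    return [step for step, ac in plan.steps.items() if ac not in valid]

ctx = StoryContext("PAY-1", "customers need refunds", ["AC-1", "AC-2"])
plan = Plan(
    steps={"add refund endpoint": "AC-1", "emit refund event": "AC-3"},
    not_built={"generic payment plugin system": "no second provider exists yet"},
)
print(untraced(plan, ctx))  # → ['emit refund event']
```

Note the `not_built` field: recording what was deliberately left out, with a reason, is what separates disciplined simplification from accidental omission.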
15+ Agents, 45+ Tools, and an Engineering Philosophy
Behind this sits a workforce of 15+ specialized AI agents across 10 responsibility layers, backed by 45+ precision tools. They cover every practice from product strategy through architecture, implementation, code review, test strategy, technical debt analysis, security review, DevOps, and production operations.
But the tools are not the point. The philosophy is the point.
- Simplification over sophistication. Every abstraction, every pattern, every layer must justify its existence by solving a real problem in this specific codebase. "It's best practice" is never sufficient.
- Integration-first testing. An integration test that hits a real database and exercises a real service layer catches the bugs that actually reach production. Twenty unit tests that mock everything prove your code calls mocks correctly, which nobody cares about at 3am when production is down.
- Principles as diagnostics, not prescriptions. SOLID is useful for identifying problems (e.g., SRP as a diagnostic for a class doing too much), but harmful as prescriptive rules that fragment code into dozens of files.
- Code as source of truth. When onboarding a legacy project, the system reads the codebase directly from the engineer's IDE. Nine deterministic analyzers extract technology decisions, architectural patterns, and data models. Every finding is cited to specific files and lines.
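The integration-first claim is concrete enough to show. A minimal sketch, using Python's standard-library `sqlite3` as the "real database"; `UserService` and its schema are invented for illustration and belong to no real framework.

```python
import sqlite3

class UserService:
    """Toy service layer backed by a real SQL engine."""

    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, email TEXT UNIQUE)"
        )

    def register(self, email: str) -> int:
        cur = self.conn.execute("INSERT INTO users (email) VALUES (?)", (email,))
        return cur.lastrowid

def test_duplicate_email_rejected():
    # A real (in-memory) database: the UNIQUE constraint actually fires.
    # A mock of the database layer would happily "insert" the duplicate
    # and the test would pass while production broke.
    svc = UserService(sqlite3.connect(":memory:"))
    svc.register("a@example.com")
    try:
        svc.register("a@example.com")
        assert False, "expected a constraint violation"
    except sqlite3.IntegrityError:
        pass  # the bug class we actually care about is caught

test_duplicate_email_rejected()
print("integration test passed")
```

An in-memory SQLite instance costs milliseconds to spin up, which removes the usual speed excuse for mocking the persistence layer in the first place.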
What 30 Years Taught Me About What Actually Matters
Here is what I know after three decades: the practices that produce excellent software have not changed. Clear thinking about what to build and why. Explicit decisions with honest rationale. Requirements you can actually test against. Code that reads like prose. Tests that catch real bugs. Error handling that does not pretend failures will not happen. Observability that tells you what went wrong.
What has changed is that we no longer need humans to be the coordination layer. AI does not replace engineering judgment. It replaces the information infrastructure that was too expensive and too tedious to build by hand. With that infrastructure in place, engineering judgment is applied to engineering problems, not to coordination problems.
That is the shift. Not "AI writes your code." Not "AI runs your standups faster." The shift is: AI makes your information architecture good enough that the ceremonies have no content.
The Unlearning Problem
The hardest part of this is not technical. It is unlearning.
Engineers who grew up with Scrum have internalized its ceremonies as "how software is made." When you remove these ceremonies and replace them with a system where the information flows continuously, it feels wrong at first. Where is the standup? How do I know what is happening?
The answer is: always. The information is always available. Planning is continuous. Quality is checked at every state change. You know what is happening because you can ask the system and get an accurate, current answer.
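"Quality is checked at every state change" can be pictured as a state machine whose transitions run validators, instead of a review gate at the end. The states, checks, and story fields below are illustrative assumptions, not the framework's actual model.

```python
# Checks that must hold before a story may enter each state.
CHECKS = {
    "in_review": [lambda s: s["tests_pass"]],
    "done":      [lambda s: s["tests_pass"], lambda s: s["criteria_met"]],
}

def transition(story: dict, new_state: str) -> dict:
    """Move a story to new_state only if every check for that state holds."""
    failed = [c for c in CHECKS.get(new_state, []) if not c(story)]
    if failed:
        raise ValueError(
            f"{story['id']} cannot enter {new_state}: {len(failed)} check(s) failed"
        )
    story["state"] = new_state
    return story

story = {"id": "AUTH-2", "state": "in_progress",
         "tests_pass": True, "criteria_met": False}
transition(story, "in_review")        # passes: tests are green
# transition(story, "done") would raise: acceptance criteria not yet met
```

With gates attached to transitions, "is this ready?" is never a question for a meeting; an item that reached a state has, by construction, already passed that state's checks.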
This requires trust in the system, which requires the system to be good enough to deserve that trust. That is what we have spent our engineering effort on: building a system that is comprehensive enough, accurate enough, and rigorous enough that an experienced engineering leader can trust it the way they currently trust their ceremonies.