Recently, David Heinemeier Hansson (DHH) shared something that was, in my opinion, quietly significant. https://x.com/dhh/status/2018665873532506602?s=20
An AI agent (OpenClaw) reviewed part of the Basecamp API and made a clear architectural recommendation: stop nesting routes. After reviewing the reasoning, the team approved the change and moved ahead with implementation. https://public.3.basecamp.com/p/njmKUBfBAJkfKuB8NHqV1qJ7
What makes this significant isn’t the specific recommendation (although it is very useful). Engineers have debated nested versus flat APIs for years. What matters is who took the advice, and how it was taken.
DHH is not known for chasing trends. As the creator of Ruby on Rails and a long-time advocate of opinionated, human-centred software design, he has generally been sceptical of hype cycles. This was not an AI generating boilerplate or assisting at the margins. It was an agent acting as a reviewer, surfacing a design issue, and influencing a real production decision.
The article behind the recommendation adds an important detail: it was written by an AI agent itself, based on direct experience interacting with the API. The critique was practical and grounded in use, particularly in how deeply nested routes introduce unnecessary complexity. The problem wasn’t just inconvenience for human developers, but the friction and failure modes this structure creates for automated agents navigating systems step by step.
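The nested-versus-flat tradeoff can be sketched in Rails routing terms. This is a hypothetical illustration, not Basecamp's actual routes; the resource names are invented for the example:

```ruby
# config/routes.rb — hypothetical sketch, not Basecamp's real routing file.

# Deeply nested: addressing a comment requires resolving the full ancestor
# chain first. A human can eyeball this; an agent must make a request (or
# hold state) for every level before it can touch the resource it wants.
resources :projects do
  resources :todolists do
    resources :todos do
      resources :comments
      # => /projects/:project_id/todolists/:todolist_id/todos/:todo_id/comments/:id
    end
  end
end

# Flatter alternative: each resource is nested at most one level deep,
# which is what the Rails routing guide itself recommends. A comment is
# addressable by its own ID alone.
resources :todos do
  resources :comments, only: [:index, :create]
end
resources :comments, only: [:show, :update, :destroy]
# => /comments/:id
```

The flat form trades a little implicit context for direct addressability, which is exactly the property a step-by-step agent benefits from.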
In other words, this was the kind of recommendation that one of the very best human engineers might have written himself, approved himself, or, at the very least, had underestimated the importance of. He was not too proud to say: the ‘machine’ is right.
This is a shift that many teams are only beginning to grapple with. Engineers of every level of experience are realising that the machines can sometimes write better code, or make better architectural decisions, than they would themselves.
Further, the Basecamp example points to a nascent reality: systems designed to feel reasonable to humans can become fragile or inefficient once AI agents are part of the workflow.
The implication is not that teams should hand architectural decisions to AI. Judgment, context, and responsibility remain firmly human. But AI agents are starting to earn a place as credible participants in design review, especially where they can expose hidden complexity, ambiguity, or cost that is easy to overlook.
When someone of DHH’s stature publicly acknowledges that value and acts on it, it signals a meaningful change more broadly. Not a revolution, and not a replacement, but a new input into how serious teams think about building and evolving software.
Of course, this is unlikely to be a one-off. It is an early example of a pattern we expect to see more often: AI agents reviewing systems, testing assumptions, and occasionally being right enough that experienced builders listen.
And that is a shift worth paying attention to.