On December 10, I’m joining a bunch of cool people on a LeadDev panel to talk about guardrails and AI use. Register to join us!
As we were doing the pre-panel discussion, I started thinking about the metaphor we were using. Guardrails are not uniquely American, but they are very American. We see them so much that they're effectively invisible. The time I think most about guardrails is when I'm watching European cycling, where riders going 70 km/h descend roads that look like this:

People drive cars on that! And RVs, even. It’s amazing to the American mind.
I think the difference is that Americans are used to speed. Our interstate highway system, and even our arterial city stroads, are designed for speed. They're wide and smooth, with huge sightlines and guardrails.
Guardrails have two purposes, and we tend to think of them as the same thing, but they're not. The mountain guardrails that prevent you from plummeting down a cliff are there to protect you, because wrecking your car is safer and better than skidding or flipping hundreds of feet off the road. The highway guardrail is for the benefit of other people. All those cable median guardrails exist because it is better for you to run into a fixed object than into another car going the opposite direction.
However. Our guardrails are designed for last century's cars. EVs are significantly heavier than their internal combustion peers, and their center of mass is lower. An EV pickup will blow right through a guardrail that would have stopped a conventional pickup. The guardrails are not prepared for the inertia of these new vehicles. They are going to stay in motion.
This is very much how I feel about AI. We need to take into account both the speed we want to go, and the other factors, like how heavy/critical our application is, and how much we can see/predict while we’re driving it.
If we want to go fast, freeway-fast, we need to build for it:
- Wide, smooth surfaces – AI implementation has to be part of our platform, and managed to make it consistent and predictable across an enterprise
- Good sightlines – We need to understand the scope of what we’re doing, and what it will affect. Almost every “AI disaster” story I can think of is a result of getting the scope wrong.
- Heavy-duty guardrails – Organizations need to make sure that AI does not allow anyone to plunge off cliffs or create head-on crashes. That is going to take a combination of social and technical solutions.
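The technical half of that combination can be very simple. As a minimal sketch (the action names and policy here are hypothetical, purely for illustration), the idea is that an AI agent's proposed actions pass through an explicit policy check before anything executes — unknown actions never run, and sensitive ones require a human in the loop:

```python
# Hypothetical guardrail policy: an allowlist of actions the AI may take,
# plus a subset that needs explicit human approval before running.
ALLOWED_ACTIONS = {"read_file", "run_tests", "open_pr"}
REQUIRES_REVIEW = {"open_pr"}

def check_action(action: str, human_approved: bool = False) -> bool:
    """Return True only if the proposed action is allowed under the policy."""
    if action not in ALLOWED_ACTIONS:
        return False  # hard guardrail: anything off the list never runs
    if action in REQUIRES_REVIEW and not human_approved:
        return False  # soft guardrail: escalate to a person first
    return True

# The agent proposes; the guardrail decides.
assert check_action("run_tests") is True
assert check_action("delete_prod_db") is False          # off the cliff: blocked
assert check_action("open_pr") is False                 # blocked until reviewed
assert check_action("open_pr", human_approved=True) is True
```

The design choice here is the guardrail one: the check sits outside the AI, in ordinary code the organization controls, so it holds regardless of what the model generates.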
Conclusion
Vibe coding is a real thing, but it’s the equivalent of “motoring” in an open car on roads built for horse-drawn vehicles. It’s a necessary step, but we need to change the whole infrastructure to be able to take advantage of AI speed. I’m reading Forsgren and Noda’s new book, Frictionless, and it makes the same point – AI can only move as quickly as the infrastructure supporting it. That infrastructure has to include the safety for developers, teams, and organizations to move quickly without endangering themselves and others.