what agentic UX can learn from friction-maxxing
As a trend, friction-maxxing is a reaction to AI, but let’s not write it off… what can we learn from it?
The impulse behind friction-maxxing is worth taking seriously: people want to feel in control, especially when an agent is about to do something they can’t easily undo. That instinct maps directly onto patterns already emerging in agentic UX.
Dry runs let users see what an agent would do before it does anything — a preview of consequences before commitment. This isn’t friction for friction’s sake; it’s informed consent.
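A dry run can be as simple as separating planning from execution. A minimal sketch, where all names (`plan_actions`, `Action`, `run`) are hypothetical and stand in for whatever an agent framework would actually expose:

```python
from dataclasses import dataclass

@dataclass
class Action:
    verb: str
    target: str

def plan_actions(files: list[str]) -> list[Action]:
    """Compute what the agent WOULD do, without doing any of it."""
    return [Action("delete", f) for f in files if f.endswith(".tmp")]

def run(files: list[str], dry_run: bool = True) -> list[str]:
    plan = plan_actions(files)
    if dry_run:
        # Preview of consequences before commitment: nothing is touched.
        return [f"[dry-run] would {a.verb} {a.target}" for a in plan]
    # Real execution would go here; same plan, so preview and action can't drift.
    return [f"{a.verb}d {a.target}" for a in plan]

print(run(["a.tmp", "b.txt"]))  # → ['[dry-run] would delete a.tmp']
```

The design point is that the preview and the execution share one plan, so what the user approves is exactly what runs.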
Double confirmations before destructive actions are another example. When an agent is about to delete, overwrite, or send something irreversible, a second prompt isn’t annoying — it’s the right amount of pause. The friction is proportional to the stakes.
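"Friction proportional to the stakes" can be made literal: scale the number of confirmations by the verb's reversibility. A hypothetical sketch (the verb set and function names are illustrative, not from any real agent API):

```python
# Irreversible verbs earn a second confirmation; everything else gets one.
IRREVERSIBLE = {"delete", "overwrite", "send"}

def confirmations_needed(verb: str) -> int:
    """Friction proportional to the stakes."""
    return 2 if verb in IRREVERSIBLE else 1

def execute(verb: str, target: str, approvals: list[bool]) -> str:
    needed = confirmations_needed(verb)
    if sum(approvals[:needed]) < needed:
        return f"aborted: {verb!r} on {target!r} needs {needed} confirmation(s)"
    return f"{verb} {target}: done"

print(execute("delete", "report.docx", [True]))        # one yes isn't enough
print(execute("delete", "report.docx", [True, True]))  # the pause was taken
```

Reversible actions stay one click; only the irreversible ones pay the extra prompt.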
Friction-maxxing taken to an extreme is just bad UX. But the underlying concern — that automation can move faster than understanding — is a real design problem. The answer isn’t to slow everything down uniformly. It’s to put friction exactly where irreversibility lives.