Digital systems are designed to support people. The intuitive assumption is that automation makes processes more efficient, reduces errors, and saves time. Yet users systematically bypass security prompts, ignore warning messages, and disable protective features. This raises the questions: Why do people consciously override system defaults? What psychological mechanisms drive this behavior, and what does the evidence tell us?
Studies
The Polymorphic Warnings Experiment
Serge Egelman and his team at Carnegie Mellon University conducted a groundbreaking study of browser security warnings in 2008. The researchers observed 409 Firefox users over several weeks as the participants encountered real SSL certificate warnings while browsing. Two variants were tested: the standard warning, which looked identical for every certificate issue, and 'polymorphic' warnings that varied in appearance. The sobering result: with the standard warning, 87% of users simply clicked 'Continue' after the third exposure without reading the text. The polymorphic warnings improved attention, reducing this figure to 42%. The most striking finding: even during a potentially dangerous man-in-the-middle attack, most users ignored the warning. Through habituation, it had become a meaningless obstacle requiring only a click.
The Medical Alert Override Study
Jonathan Nebeker and colleagues at the Salt Lake City VA Medical Center investigated a critical problem in hospital systems in 2005. They analyzed 2,872 automated medication alerts that a clinical decision support system sent to physicians over six months: warnings about dangerous drug interactions, allergies, and overdoses. The researchers documented every single response. The shocking result: physicians overrode or ignored 91.6% of all alerts. Of 2,872 warnings, only 238 led to a change in behavior. Particularly alarming: even for critical drug-allergy alerts, the override rate was 87%. Physicians had learned that most warnings were false alarms, so they systematically ignored the important ones as well. Too many irrelevant alerts had rendered the system effectively useless.
Principle
What principle for Customer Experience Design can be derived from this? The core principle is simple: minimize security prompts to what is absolutely necessary, because each additional barrier increases the risk of systematic workarounds. In customer experience terms, every warning, confirmation, or security measure must deliver genuine, user-recognizable value. This becomes especially critical with repeated interactions: what initially seems like reasonable security turns into a frustrating obstacle with frequent use. The principle works best for systems with regular users; one-time or infrequent interactions can accommodate more security layers. The following guidelines show how to apply it in practice.
Guidelines
Show critical warnings only
Eliminate all security prompts that aren't immediately critical. Every warning that can be ignored 90% of the time trains users to dismiss the important 10% as well. Implement a severity classification: only genuine dangers receive warnings. Everything else should be logged in the background or presented as passive information. The paradox: fewer warnings lead to greater security because the remaining ones are actually taken seriously.
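As a minimal sketch of such a severity classification, the routing below interrupts the user only for genuine dangers and pushes everything else into passive channels. The names (routeAlert, showBlockingWarning, showPassiveNotice) are illustrative, not taken from any particular framework, and a real application would wire them to its own dialog and banner components.

```typescript
// Severity-based alert routing: only "critical" interrupts the user.
type Severity = "critical" | "warning" | "info";

interface AlertEvent {
  severity: Severity;
  message: string;
}

function routeAlert(event: AlertEvent): void {
  switch (event.severity) {
    case "critical":
      // Interrupt the user only for genuine dangers.
      showBlockingWarning(event.message);
      break;
    case "warning":
      // Passive, non-blocking notice (e.g., a banner or badge).
      showPassiveNotice(event.message);
      break;
    case "info":
      // No UI at all: record in the background for later review.
      console.info(`[alert-log] ${event.message}`);
      break;
  }
}

// Placeholder UI hooks for the purposes of this sketch.
function showBlockingWarning(message: string): void {
  console.warn(`[blocking] ${message}`);
}

function showPassiveNotice(message: string): void {
  console.log(`[passive] ${message}`);
}
```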
Use polymorphic security notices
When security prompts are necessary, vary their appearance, position, and wording. Identical warnings become automatic clicks after the third exposure. Polymorphic warnings disrupt this automation because they demand conscious processing. Specifically: for critical confirmations, alternate button positions (swap the Yes/No placement), employ different visual patterns, and vary the wording. This prevents users from operating on autopilot.
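One way to implement this, sketched below under the assumption of a generic dialog renderer, is to build the confirmation dialog freshly each time with randomized wording and button order. The phrasings and the DialogSpec shape are hypothetical examples, not an existing API.

```typescript
// "Polymorphic" confirmation: each display varies wording and button order
// so the click cannot become an automatic reflex.
interface DialogSpec {
  text: string;
  buttons: string[]; // rendered left to right
}

const PHRASINGS = [
  "Delete this record permanently?",
  "This record will be removed for good. Proceed?",
  "Permanently erase this record now?",
];

function buildPolymorphicConfirm(): DialogSpec {
  const text = PHRASINGS[Math.floor(Math.random() * PHRASINGS.length)];
  // Swap the placement of the confirming and cancelling buttons at random,
  // so muscle memory alone cannot complete the action.
  const buttons = Math.random() < 0.5 ? ["Cancel", "Delete"] : ["Delete", "Cancel"];
  return { text, buttons };
}
```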
Show concrete consequences
Abstract warnings like "This could be unsafe" are routinely ignored. Concrete consequences such as "Your credit card information could be stolen" or "You will lose your insurance coverage" capture attention. Even more effective: demonstrate the immediate impact. For example, instead of labeling a password as "Weak password," specify "With this password, someone could access your account in 2 minutes." The more concrete and immediate the consequence, the lower the override rate.
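The password example can be made concrete with a rough crack-time estimate, as in the sketch below. The brute-force model (fixed guess rate, character-set size to the power of the length) is a deliberate simplification introduced here for illustration; a production system would rather rely on an established strength estimator such as zxcvbn.

```typescript
// Turn an abstract "weak password" label into a concrete consequence.
function estimateCrackSeconds(password: string): number {
  let charset = 0;
  if (/[a-z]/.test(password)) charset += 26;
  if (/[A-Z]/.test(password)) charset += 26;
  if (/[0-9]/.test(password)) charset += 10;
  if (/[^a-zA-Z0-9]/.test(password)) charset += 33;
  const guesses = Math.pow(charset, password.length);
  const guessesPerSecond = 1e10; // assumed offline brute-force rate
  return guesses / guessesPerSecond;
}

function concreteWarning(password: string): string {
  const seconds = estimateCrackSeconds(password);
  if (seconds < 60) {
    return "With this password, someone could access your account in under a minute.";
  }
  if (seconds < 3600) {
    return `With this password, someone could access your account in about ${Math.round(seconds / 60)} minutes.`;
  }
  return "Cracking this password by brute force would take more than an hour.";
}
```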
Build in intelligent friction
When an action is critical, make the override intentionally harder—but only in the right places. For example, for dangerous actions, require users to type "CONFIRM" in a text field rather than simply clicking a button. For accessing sensitive data, implement two-factor authentication. The key is applying this friction only to truly critical actions. If every click requires such verification, you'll train users to override automatically again. Intelligent friction means placing rare but effective barriers at the few points where they genuinely provide protection.
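A small sketch of this selective friction is shown below: only actions flagged as destructive require the typed confirmation, everything else runs with a single click. The Action shape and the promptUser callback are hypothetical stand-ins for whatever prompt mechanism the application already has.

```typescript
// Intelligent friction: typed confirmation only for destructive actions.
interface Action {
  name: string;
  destructive: boolean;
  run: () => void;
}

async function executeWithFriction(
  action: Action,
  promptUser: (message: string) => Promise<string>
): Promise<void> {
  if (action.destructive) {
    // Rare but effective barrier: the user must type the word, not just click.
    const reply = await promptUser(`Type CONFIRM to ${action.name}:`);
    if (reply.trim() !== "CONFIRM") {
      console.log(`${action.name} aborted.`);
      return;
    }
  }
  // Non-destructive actions pass through without any extra step.
  action.run();
}
```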
Egelman, S., Cranor, L. F. & Hong, J. (2008). You've been warned: An empirical study of the effectiveness of web browser phishing warnings. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1065-1074
Nebeker, J. R., Hoffman, J. M., Weir, C. R., Bennett, C. L. & Hurdle, J. F. (2005). High rates of adverse drug events in a highly computerized hospital. Archives of Internal Medicine, 165(12), 1414-1420