Towards Errorless UX
Designing Systems That Safeguard Humans

A living research project tackling preventable user errors, one hypothesis at a time.
Help pioneer a world where design anticipates mistakes, so users don’t suffer them.

Why Errorless UX Isn’t Just Ideal... It’s Essential.

A mistap shouldn’t cost someone their savings, delay critical care, or confuse voters.
70% of critical UX failures stem from predictable human error patterns.
As a UX designer obsessed with resilience, I launched this initiative to bridge the gap between human behavior and digital design.
Read the Vision Document →

Abstract

Errors in user interfaces are more than simple usability flaws; they can cascade into monetary loss, data breaches, health emergencies, or the breakdown of trust in essential systems. In high-stakes environments, such as healthcare or financial platforms, preventable errors have implications comparable to misfires in nuclear systems, where design should be a safeguarding strategy, not an afterthought. This proposal seeks to bridge the gap between design intuition and quantifiable evidence by developing a robust error-prevention framework. Grounded in elementary statistics and UX psychology, this project will identify high-risk design patterns, validate error-prevention interventions through hypothesis testing, and produce a practical, scalable toolkit for designers committed to safeguarding users from critical mistakes.

Rationale

History shows that complex systems rarely fail due to malice; they fail because of poorly designed interactions, misread signals, confusing interfaces, and communication breakdowns. It is thoughtful design, through built-in checks, user-centered safeguards, and clear communication, that has prevented disasters in both technology and daily life.

Today, as digital interfaces mediate nearly every aspect of living, from voting to healthcare to education, we must design with the same level of critical foresight. It’s not enough to make systems easier to use; we must design them to prevent failure before it begins.

While conventional UX research prioritizes error recovery (think undo buttons or error messages), there remains a glaring gap: error prevention. This proposal addresses that need by laying the foundation for an open-source framework to help designers anticipate, eliminate, and mitigate errors through intentional structure, pattern design, and psychological insight.

Objectives

Classify types of user errors across contexts using empirical observation and literature review.
Identify and quantify error-prone UI patterns using comparative usability testing.
Run controlled hypothesis tests on interface variables (use of signifiers, constraints, confirmation flows) to determine statistically significant impacts on error rates.
Develop a validated UX error-prevention framework, including design heuristics, pattern libraries, and statistical models.
Publish findings as open-source tools, academic papers, and workshop modules to drive adoption in the broader UX and HCI communities.

Research Questions

Which common design patterns most frequently lead to user errors, and in what contexts?
What psychological or cognitive limitations (memory, attention, information overload) most contribute to errors in digital environments?
Can elementary hypothesis testing (t-tests, chi-square, regression) confirm that specific design interventions significantly reduce error rates?
How can we synthesize psychological theory, design principles, and statistical findings into a toolkit that generalist UX designers can use with confidence?

UX Theories Underpinning this Research

Norman's Seven Stages of Action

Helps show where users go wrong in translating goals into actions and evaluating outcomes.

Jakob Nielsen's Heuristics

Especially visibility of system status, error prevention, and recognition over recall.

Hick's Law

Too many choices increase cognitive load and chance of making errors.
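In its standard textbook form (stated here for reference, not derived from this project's data), Hick's Law models mean decision time as growing logarithmically with the number of equally likely choices:

```latex
% Hick's Law: decision time T for n equally likely options,
% with b an empirically fitted constant.
T = b \cdot \log_2(n + 1)
```

The logarithm implies that trimming a long menu by one item barely helps, while cutting the choice set in half yields a meaningful reduction in decision time and, by extension, error opportunity.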

Fitts's Law

Predicts difficulty of selecting a target, relevant for designing tappable areas and spacing.
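The commonly used Shannon formulation of Fitts's Law (again, the standard form, not project data) predicts movement time from target distance and width:

```latex
% Fitts's Law (Shannon formulation): movement time MT to acquire
% a target of width W at distance D; a and b are fitted constants.
MT = a + b \cdot \log_2\!\left(\frac{D}{W} + 1\right)
```

Larger and closer targets lower the index of difficulty, which is why generous tap areas and adequate spacing directly reduce mistap errors.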

Mental Models

Misalignments between user expectations and actual system behavior are a major source of error.

Affordances & Signifiers

Misleading cues or lack of clarity in what something does causes slips and mistakes.

Mapping & Feedback

Poor mapping between action and result leads to confusion; lack of feedback causes repeated errors.

Hypothesis

This section offers a detailed breakdown of testable hypotheses, interface scenarios, and the statistical methods used for validation. The goal is to design with rigor, communicate with clarity, and deliver results that are both reproducible and actionable. Each hypothesis will be evaluated using real-user data gathered through A/B testing and controlled, scenario-based task assessments.
Each of the following hypotheses examines how people interact with systems, digital or physical, and how psychological or environmental factors influence their decisions, errors, or satisfaction. They are written for a beginner audience, aiming to clarify not just what is being tested, but why it matters and how its effectiveness is measured.
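Before any hypothesis is A/B tested, the required sample size should be estimated so that a real reduction in error rate is actually detectable. As a minimal sketch (the baseline and target error rates below are illustrative assumptions, not project data), a standard two-proportion power calculation can be done with only the Python standard library:

```python
from math import ceil
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Participants needed per group to detect a change in error
    rate from p1 to p2 with a two-sided z-test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_beta = z.inv_cdf(power)            # quantile for desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2 * variance) / (p1 - p2) ** 2
    return ceil(n)

# Hypothetical example: a 30% baseline error rate that an
# intervention is expected to cut to 15%.
print(sample_size_two_proportions(0.30, 0.15))  # → 118 per group
```

Smaller expected effects demand sharply larger samples, which is one reason each hypothesis must state its expected effect size up front.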

Methodology

Research Protocol for Data Collection
To ensure rigor and replicability, a formal research protocol will be applied. Data will be gathered across a range of user interactions using both qualitative and quantitative methods. Participants will complete tasks in controlled environments, either remotely or in-lab, with their behaviors tracked through observation, interaction logs, and performance metrics.
  • Session recording
  • Error Logging and Event Tracking
  • Time-on-task and success metrics
  • Post-task self-reporting
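The logged events above reduce to per-participant metrics for analysis. A minimal sketch of one possible log schema (the field and event names are illustrative assumptions, not a prescribed format):

```python
from dataclasses import dataclass

@dataclass
class TaskEvent:
    participant_id: str
    variant: str        # which design version was shown
    event: str          # e.g. "tap", "error", "task_complete"
    timestamp: float    # seconds since task start

def summarize(events):
    """Derive error count and time-on-task from a raw event stream."""
    errors = sum(1 for e in events if e.event == "error")
    times = [e.timestamp for e in events]
    time_on_task = max(times) - min(times) if times else 0.0
    return {"errors": errors, "time_on_task": time_on_task}

log = [
    TaskEvent("p01", "A", "tap", 0.0),
    TaskEvent("p01", "A", "error", 4.2),
    TaskEvent("p01", "A", "task_complete", 11.5),
]
print(summarize(log))  # → {'errors': 1, 'time_on_task': 11.5}
```

Note that the schema carries no identifying information beyond an opaque participant code, consistent with the anonymization requirement below.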
User testing sessions will be structured so that participants experience design versions in different orders. This reduces the risk that familiarity with the task influences the results. All collected data will be stripped of identifying information before analysis to protect participant privacy. The study will strictly follow IRB ethical guidelines, including obtaining informed consent from all participants.
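Varying presentation order can be implemented with a simple cyclic Latin square, so that every design version appears in every serial position equally often across participants. A minimal sketch (the condition labels are placeholders):

```python
def latin_square_orders(conditions):
    """Cyclic Latin square: row i presents the conditions rotated
    by i, so each condition occupies each serial position exactly
    once across the full set of orders."""
    k = len(conditions)
    return [[conditions[(i + j) % k] for j in range(k)] for i in range(k)]

orders = latin_square_orders(["A", "B", "C"])
for row in orders:
    print(row)
# Participant 1 sees A,B,C; participant 2 sees B,C,A;
# participant 3 sees C,A,B — order effects are balanced out.
```

Participants are assigned to rows in rotation, so averaging across the group cancels practice and fatigue effects rather than letting them masquerade as design effects.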

Recommended Interfaces & Scenarios to Test

To ensure the findings are generalizable, error-prone design patterns will be tested within realistic and diverse application contexts, each paired with a corresponding use-case scenario.
Each scenario will be mapped directly to one or more hypotheses and error types, enabling causal inference through structured experimentation.
By integrating statistical tests with these grounded interface scenarios, this project ensures findings are both generalizable and immediately actionable for designers across multiple sectors.

Phases

1. Literature Review

  • Analyze over 100 academic papers, industry case studies, and heuristic guidelines to classify known user error patterns
  • Review NASA, FDA, and NNG publications on interface failure and prevention

2. Comparative Design Testing

  • Create interactive prototypes using Figma & ProtoPie with controlled design variations
  • Recruit a diverse participant group (n=60–100) to perform predefined tasks across variants
  • Record error frequency, task time, and self-reported workload and frustration using instruments such as NASA-TLX or SUS

3. Statistical Analysis

  • Apply t-tests, ANOVAs, and chi-square tests to the collected data
  • Correlate interface variables with user behavior metrics
  • Use regression analysis to model which combinations of UX principles best predict error reduction
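As one concrete illustration of the tests above, comparing error counts between two variants can be done with a two-proportion z-test (equivalent to a 2×2 chi-square without continuity correction), again using only the standard library. The counts below are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(errors_a, n_a, errors_b, n_b):
    """Two-sided z-test for a difference in error rates
    between two independent groups."""
    p_a, p_b = errors_a / n_a, errors_b / n_b
    pooled = (errors_a + errors_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical counts: 18/50 participants erred on variant A
# versus 7/50 on variant B.
z, p = two_proportion_z_test(18, 50, 7, 50)
print(f"z = {z:.2f}, p = {p:.4f}")  # → z = 2.54, p = 0.0111
```

A p-value below 0.05 here would support the claim that the variant B intervention reduces errors; the regression step then models which combination of such interventions predicts the largest reduction.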

4. Define Framework & Toolkit

  • Build the UX Safeguard Toolkit
  • Make resources accessible to designers via public repository

5. Dissemination

  • Write a publishable academic paper and submit to CHI, UXPA, or IxDA
  • Host a webinar and workshop series to teach findings
  • Create a conference presentation with interactive demos

Significance & Impact

This research has the potential to fundamentally shift how designers think about usability. Instead of reacting to user errors, we’ll preempt them. By combining design craft, statistical evidence, and cognitive theory, this project will deliver a new paradigm of UX practice, one that treats user error not as an inevitability, but as a solvable design flaw. In the process, we hope to inspire an industry-wide shift from “fail gracefully” to “fail less.”
The toolkit will be used not only in commercial product teams, but also in classrooms, public service interfaces, and nonprofit settings. Designing against disaster shouldn’t be a privilege; it should be the standard.
In a world that increasingly lives inside its interfaces, every interaction is an opportunity to help, or to harm. I believe UX designers must approach their work with the same gravity that engineers once applied to missile silos. This proposal is a blueprint for that responsibility. With the right funding, data, and design, we can help ensure that our interfaces serve users not just functionally, but safely.