The Harm Blindness Framework

A Systematic Approach to Preventing Stakeholder Harm in Technology Development

Dual Strategy Approach

For Business Leaders

ROI-focused approach highlighting cost savings, risk mitigation, and competitive advantage

For Those Who Care About Impact

Ethics-driven approach emphasizing harm prevention, stakeholder protection, and moral responsibility

What is Harm Blindness?

Harm Blindness is the systematic failure to identify stakeholder harm during decision-making. It results from three core failures:

Stakeholder Myopia - Only considering beneficiaries while ignoring displaced or harmed parties

Ethical Abdication - Assuming responsibility lies elsewhere ("not my problem")

Historical Precedent Fallacy - Believing technology always creates more value than it destroys

The result: Preventable disasters ranging from AI-related deaths to billion-dollar corporate settlements.

  • 100% identifiability - validated against 138 historical cases spanning 5,000+ years
  • $200+ billion in preventable corporate settlements analyzed
  • 4,000-20,000x return on investment vs. lawsuit costs
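As a rough illustration of how the ROI range relates to the settlement figure: dividing the $200 billion in analyzed settlements by the stated 4,000-20,000x ratios implies a total implementation spend in the tens of millions. The cost figures below are inferred from those two published numbers, not taken from the framework's cost-benefit analysis.

```python
# Illustrative arithmetic only: derives the implied implementation cost
# from the two figures stated above. The dollar costs are inferred from
# the stated ratios, not quoted from the framework documents.
settlements = 200e9                  # $200+ billion in analyzed settlements
roi_low, roi_high = 4_000, 20_000    # stated ROI range

# ROI = avoided_cost / implementation_cost, so the implied spend is:
cost_high = settlements / roi_low    # higher spend pairs with the lower ROI
cost_low = settlements / roi_high

print(f"Implied implementation cost: ${cost_low/1e6:.0f}M-${cost_high/1e6:.0f}M")
# → Implied implementation cost: $10M-$50M
```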

Four Simple Checkpoints. Zero Assumptions.

The Harm Blindness Framework uses four mandatory checkpoints at key decision points:

Checkpoint 1 - Ideation

  • What problem are we solving?
  • Who benefits from this solution?
  • Who else is affected beyond direct beneficiaries?

Checkpoint 2 - Design

  • How does this system work?
  • What incentives does this create?
  • What happens when this scales?

Checkpoint 3 - Implementation

  • Who will be affected but wasn't included in planning?
  • What power imbalances does this create or reinforce?
  • What happens to people who can't participate or benefit?

Checkpoint 4 - Outcomes

  • Full stakeholder analysis: who benefits, who's harmed, net outcome?
  • What precedent does this set?
  • What do harmed stakeholders do when they reach a breaking point?
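For teams that want to wire the checkpoints into their tooling, the four stages above can be sketched as a simple data structure with a pass/fail gate. The names here (`Checkpoint`, `CHECKPOINTS`, `unanswered`) are illustrative assumptions, not part of the framework's published templates.

```python
# Minimal sketch: the four checkpoints as data, so a team can track which
# questions were answered at each decision point. Class and field names
# are illustrative, not the framework's own API.
from dataclasses import dataclass, field

@dataclass
class Checkpoint:
    stage: str
    questions: list[str]
    answers: dict[str, str] = field(default_factory=dict)

    def unanswered(self) -> list[str]:
        """Questions still missing an answer -- the gate for moving on."""
        return [q for q in self.questions if q not in self.answers]

CHECKPOINTS = [
    Checkpoint("Ideation", [
        "What problem are we solving?",
        "Who benefits from this solution?",
        "Who else is affected beyond direct beneficiaries?",
    ]),
    Checkpoint("Design", [
        "How does this system work?",
        "What incentives does this create?",
        "What happens when this scales?",
    ]),
    Checkpoint("Implementation", [
        "Who will be affected but wasn't included in planning?",
        "What power imbalances does this create or reinforce?",
        "What happens to people who can't participate or benefit?",
    ]),
    Checkpoint("Outcomes", [
        "Full stakeholder analysis: who benefits, who's harmed, net outcome?",
        "What precedent does this set?",
        "What do harmed stakeholders do when they reach a breaking point?",
    ]),
]

# A checkpoint passes only when every question has a recorded answer.
ideation = CHECKPOINTS[0]
ideation.answers["What problem are we solving?"] = "Example answer"
print(len(ideation.unanswered()))  # two questions still open
```

Because each checkpoint is "mandatory" in the framework's terms, a team could block a stage transition in CI or a release checklist whenever `unanswered()` is non-empty.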

Proven Across 5,000 Years of History

Historical Validation

Tested against 138 documented exploitation patterns (3000 BCE to the present)

The framework works regardless of whether actors are motivated by ethics or profit.

MIT AI Risk Repository Integration

The Harm Blindness Framework has been tested against the MIT AI Risk Repository — a comprehensive taxonomy of AI risks across 7 primary domains and 24 subdomains developed by the MIT AI Risk Initiative. Key elements from the MIT taxonomy have been integrated into the framework's checkpoint questions, ensuring both systematic stakeholder analysis and comprehensive coverage of known AI risk categories.

Corporate Cost Analysis

Analyzed major corporate lawsuits from the last decade


Built for Everyone Who Makes Decisions

Developers & Engineers - Integrate into sprints without workflow disruption
Product Managers - Systematic risk identification before launch
Corporate Leadership - Avoid billion-dollar settlements
Policymakers - Evidence-based regulatory frameworks
Academic Researchers - Validated methodology for harm prevention studies
Startups - Scale responsibly from day one

Comprehensive Framework Documentation

The Harm Blindness Framework

Complete methodology, implementation guide, case studies, and audience-specific guides

  • 100+ pages of practical guidance
  • Checkpoint templates ready to use
  • Real-world case studies
  • Integration workflows for existing processes
Download PDF (Version 2)

Historical Validation Study

Complete analysis of all 138 cases across 5,000+ years

  • Full research methodology
  • Case-by-case breakdown
  • Statistical validation results
  • Hypothesis confirmation
Download PDF

Corporate Cost-Benefit Analysis

ROI analysis of framework implementation vs. lawsuit costs

  • $200+ billion in preventable settlements
  • 4,000-20,000x ROI calculations
  • Detailed case financials
  • Implementation cost projections
Download PDF

Harm Blindness Framework Implementation Checklist

Quick-reference checklist for implementing the framework in your organization

  • Step-by-step implementation roadmap
  • Checkpoint integration templates
  • Team responsibility assignments
  • Progress tracking tools
Download PDF (Version 2)

Implementation Guide

Detailed guide for integrating the framework into existing workflows

  • Integration with agile/waterfall methodologies
  • Team training materials
  • Stakeholder communication templates
  • Success metrics and KPIs
Download PDF (Version 2)

Checkpoint Templates

Ready-to-use templates for all four framework checkpoints

  • Fillable checkpoint worksheets
  • Question prompts for each stage
  • Stakeholder mapping tools
  • Documentation templates
Download PDF (Version 2)

Enforcement Templates for Policy Implementation

Policy frameworks and enforcement mechanisms for organizational adoption

  • Policy language templates
  • Compliance monitoring systems
  • Enforcement procedures
  • Audit and reporting frameworks
Download PDF (Version 2)

Open for Collaboration

This framework is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0).

What this means:

  • Free to use for educational, research, and non-commercial purposes
  • Must credit the Real Safety AI Foundation and the author
  • Modifications require collaborative involvement with the author
  • Commercial licensing available

Why collaboration? This framework represents the first comprehensive cross-industry stakeholder analysis system. Its effectiveness depends on maintaining systematic rigor. We welcome partnerships with organizations committed to preventing stakeholder harm.

Created by Independent AI Safety Researcher

Developed by Hobbes (Travis Gilly), founder of Real Safety AI Foundation and creator of AI Literacy Labs. This framework emerged from investigating documented cases of AI-related deaths and systematic failures in technology development.

The work is driven by a simple principle: uncertainty about AI consciousness warrants implementing ethical protections now, not waiting for proven sentience.

Ready to Prevent the Next Billion-Dollar Mistake?

Email: t.gilly@ai-literacy-labs.org

Organization: Real Safety AI Foundation

Website: realsafetyai.org

Interested in implementing this framework or collaborating on harm prevention research? Get in touch.

Framework Versions

Access previous versions of the Harm Blindness Framework documentation.