A Systematic Approach to Preventing Stakeholder Harm in Technology Development
Available in two framings:
ROI-focused approach highlighting cost savings, risk mitigation, and competitive advantage
Ethics-driven approach emphasizing harm prevention, stakeholder protection, and moral responsibility
Harm Blindness is the systematic failure to identify stakeholder harm during decision-making. It results from three core failures:
Stakeholder Myopia - Only considering beneficiaries while ignoring displaced or harmed parties
Ethical Abdication - Assuming responsibility lies elsewhere ("not my problem")
Historical Precedent Fallacy - Believing technology always creates more value than it destroys
The result: Preventable disasters ranging from AI-related deaths to billion-dollar corporate settlements.
The Harm Blindness Framework uses four mandatory checkpoints at key decision points.
Tested against 138 documented exploitation patterns (3000 BCE to present)
The framework works regardless of whether actors are motivated by ethics or by profit.
The Harm Blindness Framework has been tested against the MIT AI Risk Repository — a comprehensive taxonomy of AI risks across 7 primary domains and 24 subdomains developed by the MIT AI Risk Initiative. Key elements from the MIT taxonomy have been integrated into the framework's checkpoint questions, ensuring both systematic stakeholder analysis and comprehensive coverage of known AI risk categories.
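For teams that want to track checkpoint outcomes in their own tooling, the following is a minimal sketch of what a single checkpoint record could look like, assuming a Python workflow. The `Checkpoint` class, its field names, the example risk-domain label, and the `is_complete` rule are illustrative assumptions, not the framework's published checkpoint questions or the MIT taxonomy itself.

```python
# Hypothetical sketch: one framework checkpoint captured as structured data.
# Field names and labels are placeholders; the actual checkpoint questions and
# risk-domain labels should come from the framework documents and the MIT AI
# Risk Repository.
from dataclasses import dataclass, field


@dataclass
class Checkpoint:
    name: str                                             # e.g. "Stakeholder identification"
    beneficiaries: list[str] = field(default_factory=list)       # parties the project serves
    potentially_harmed: list[str] = field(default_factory=list)  # displaced or harmed parties
    # Identified harms tagged with a risk-domain label drawn from a taxonomy
    # such as the MIT AI Risk Repository (label below is a placeholder).
    harms: dict[str, str] = field(default_factory=dict)
    mitigations: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        """Pass only if potentially harmed parties were actually considered and,
        when harms are identified, at least one mitigation has been recorded."""
        return bool(self.potentially_harmed) and (not self.harms or bool(self.mitigations))


# Example usage with placeholder content.
cp = Checkpoint(
    name="Stakeholder identification",
    beneficiaries=["end users", "shareholders"],
    potentially_harmed=["displaced workers", "data subjects"],
    harms={"job displacement": "Socioeconomic & Environmental"},
    mitigations=["retraining budget", "phased rollout"],
)
assert cp.is_complete()
```

A structure like this makes the first two failure modes above auditable: a checkpoint with an empty "potentially harmed" list, or harms with no recorded mitigations, is flagged rather than silently approved.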
Analyzed major corporate lawsuits from the last decade
Major Cases Analyzed:
Complete methodology, implementation guide, case studies, and audience-specific guides
Complete analysis of all 138 cases across 5,000+ years
ROI analysis of framework implementation vs. lawsuit costs
Quick-reference checklist for implementing the framework in your organization
Detailed guide for integrating the framework into existing workflows
Ready-to-use templates for all four framework checkpoints
Policy frameworks and enforcement mechanisms for organizational adoption
This framework is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0).
Why collaboration? This framework represents the first comprehensive cross-industry stakeholder analysis system. Its effectiveness depends on maintaining systematic rigor. We welcome partnerships with organizations committed to preventing stakeholder harm.
Developed by Hobbes (Travis Gilly), founder of Real Safety AI Foundation and creator of AI Literacy Labs. This framework emerged from investigating documented cases of AI-related deaths and systematic failures in technology development.
The work is driven by a simple principle: uncertainty about AI consciousness warrants implementing ethical protections now, not waiting for proven sentience.
Email: t.gilly@ai-literacy-labs.org
Organization: Real Safety AI Foundation
Website: realsafetyai.org
Interested in implementing this framework or collaborating on harm prevention research? Get in touch.
Access previous versions of the Harm Blindness Framework documentation.