Comprehensive Harm Database

5,000+ Years of Human-Caused Harm. Solutions for a Better Future.

"What better use of the history of our atrocities than to learn from our mistakes?"


Presented by Real Safety AI Foundation

Understanding the Pattern of Harm

The Comprehensive Harm Database is a digital museum and educational resource dedicated to documenting the predictable patterns of human-caused harm. From Ancient Sumer to modern Silicon Valley, the mechanisms of negligence remain startlingly consistent. By analyzing 312 cases through our proprietary Harm-Blindness Framework, we provide researchers, policymakers, and organizations with the historical context needed to prevent future tragedies. This is not just an archive of failure; it is a roadmap for safer, more ethical design.

5,000+ YEARS (3000 BCE - PRESENT)

161 Historical Cases

From ancient mining operations to colonial exploitation—document how societies have externalized harm across millennia. Includes Roman Mining, Atlantic Slave Trade, and Radium Girls.

Explore Historical Cases →
20+ INDUSTRIES

151 Corporate Cases

Modern corporate harms spanning pharma, automotive, tech, finance, and energy. Learn from billion-dollar disasters like Dieselgate, Boeing 737 MAX, and Fen-Phen.

Browse Corporate Cases →
METHODOLOGY

Harm-Blindness Framework

Explore our proprietary methodology for identifying where decisions go wrong. Learn the four critical checkpoints that prevent harm: Ideation, Design, Testing, and Launch.

Learn About the Framework →
INDUSTRY-AGNOSTIC RESOURCES

Global Risk & Harm Repositories

A comprehensive library of existing frameworks, risk trackers, and exploitation databases from around the world, covering all industries (not just AI) and aimed at preventing harm.

Browse External Resources →

By the Numbers

312 Case Studies
5,000+ Years Documented
20+ Industries Covered
95%+ Preventability

Every case study is analyzed through the Harm-Blindness Framework. Nearly every harm was preventable with proper stakeholder analysis and ethical design.

The Harm-Blindness Framework

Our proprietary methodology for identifying where decisions go wrong. Every case maps to four critical checkpoints:

1. Ideation

Who's missing from your stakeholder analysis? Identifying the unseen victims before a concept is approved.

2. Design

What perverse incentives are embedded in your system? Architectural choices that encourage negligence.

3. Testing

Are you testing with the most vulnerable populations? Moving beyond "happy path" QA.

4. Launch

Is this defensible? What accountability exists? Ensuring long-term stewardship.
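The four checkpoints above form an ordered pipeline, and every case maps onto it. A minimal sketch of how a case record might encode that mapping (the schema, field names, and the Radium Girls encoding are illustrative assumptions, not the database's actual structure):

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class Checkpoint(Enum):
    # Defined in pipeline order; Enum iteration preserves definition order.
    IDEATION = "ideation"   # missing stakeholders in the analysis
    DESIGN = "design"       # perverse incentives built into the system
    TESTING = "testing"     # vulnerable populations excluded from QA
    LAUNCH = "launch"       # no accountability or long-term stewardship


@dataclass
class CaseStudy:
    """Hypothetical record for one entry in the harm database."""
    name: str
    year: int
    industry: str
    failed_checkpoints: set = field(default_factory=set)

    def earliest_failure(self) -> Optional[Checkpoint]:
        """First checkpoint, in pipeline order, where this case failed."""
        for checkpoint in Checkpoint:
            if checkpoint in self.failed_checkpoints:
                return checkpoint
        return None


# Illustrative encoding only: which checkpoints the Radium Girls
# case might be tagged with is an assumption for this sketch.
radium = CaseStudy(
    name="Radium Girls",
    year=1917,
    industry="manufacturing",
    failed_checkpoints={Checkpoint.TESTING, Checkpoint.LAUNCH},
)
print(radium.earliest_failure())  # Checkpoint.TESTING
```

Ordering the checkpoints as an enum lets analyses ask not just *whether* a case failed, but *where in the pipeline* the failure first became visible.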

Help Us Expand This Archive

This database grows through collaboration. Whether you've experienced harm, witnessed exploitation, or identified gaps in our coverage—we want to hear from you.


Start Exploring

Dive into centuries of documented harm. Learn from past failures. Prevent future tragedies.