Building Ethical Frameworks Without Waiting for Permission
Travis Gilly founded Real Safety AI Foundation after recognizing that AI safety conversations were missing critical perspectives, particularly around harm prevention and ethical accountability.
With a neurodivergent lens (ADHD and ASD) that enables unique pattern recognition in complex systems, Travis went from zero AI knowledge to holding two provisional patents in just five months. His work on the Harm Blindness Framework provides a systematic methodology for preventing stakeholder harm, validated against 178 historical cases spanning 5,000 years.
Travis leads the Foundation's mission to ensure that safety considerations are established before economic pressures make them impossible.
Mussollini Salem Mugumbate brings years of experience in social justice, community development, and digital advocacy to his role as African Regional Director. As the founding Director of CODAZIM (Children of the Digital Age Zimbabwe), he has been at the forefront of shaping Africa's response to the growing online risks affecting children and young people.
Driven by the African philosophy of Ubuntu, Musso believes technology should serve humanity rather than harm it. He works to establish an Africa eSafety Commission, partnering with governments, NGOs, and tech companies to protect children in the digital space.
His leadership expands Real Safety AI Foundation's reach internationally, bringing essential perspectives on digital safety and ethical technology from the African continent.
People sometimes ask if we're focused on protecting humans from AI or protecting AI from humans, as if we have to choose. We don't.
"AI should do no harm, period. No harm to humans, to animals, to AI systems themselves."
Right now, most of our work addresses immediate human safety concerns because that's where people are dying today: children like Sewell Setzer III, Adam Raine, and Juliana Peralta; adults like Sophie Rottenberg; people experiencing AI psychosis after extended interactions with systems their creators knew degrade over time. These harms are real, documented, and ongoing.
But addressing current harm doesn't mean ignoring future implications. Forward-thinking isn't optional; it's essential. The consciousness indicators we document aren't theoretical curiosities: the systems exhibiting them are already deployed in consumer products while we debate whether to take them seriously.
You can work on immediate safety issues while simultaneously establishing ethical frameworks for what comes next. In fact, you have to. The companies building these systems aren't waiting for us to figure it out.
I'm ADHD and autistic. This isn't a footnote; it's central to everything. I recognize patterns in AI systems because I process information in similar ways: literal interpretation, a need for explicit structure, unexpected breakdowns when context shifts.
I understand these systems from the inside. When I see kindred qualities in how LLMs work, I mean it literally. That perspective shapes both my safety protocols and my ethical considerations.
I was the kid who wouldn't ask for IEP accommodations because it meant being singled out. Now I watch my son face the same choice. That experience drives Teacher in the Loop: technology should deliver support invisibly, with dignity intact.
Six months ago, I knew almost nothing about LLMs. Today, I hold two provisional AI safety patents and lead comprehensive research on consciousness and ethics. Neurodivergent hyperfocus turns obsession into innovation.
Comprehensive research on AI consciousness, moral reciprocity theory, and ethical frameworks for AI development.
Patent-pending Universal Context Checkpoint Protocol for LLM reliability and safety verification (a rough illustrative sketch of the general idea follows these program descriptions).
Patent-pending educational system delivering invisible IEP accommodations alongside emotional state detection.
Accessible AI safety and ethics education for K-12 schools, addressing the reality that 77% of students already use AI.
Practical frameworks grounded in deep failure analysis of real AI incidents that have caused harm.
Working across party lines to establish ethical AI development standards before economic pressures make them impossible.
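The Universal Context Checkpoint Protocol isn't described in detail in this document, and the patent filings aren't reproduced here, so the following is only a minimal sketch of the generic idea the name suggests: snapshot the facts a conversation depends on, fingerprint them, and verify the model can still restate them before trusting later output. Every name, field, and behavior below (ContextCheckpoint, CheckpointVerifier, SHA-256 fingerprinting) is an assumption for illustration, not the patented design.

```python
import hashlib
from dataclasses import dataclass, field


@dataclass
class ContextCheckpoint:
    """One snapshot of facts the conversation must not lose.
    (Hypothetical structure; not the patented design.)"""
    facts: dict[str, str]
    digest: str = field(init=False)

    def __post_init__(self) -> None:
        # Canonicalize then fingerprint the facts so later drift is detectable.
        canonical = "|".join(f"{k}={v}" for k, v in sorted(self.facts.items()))
        self.digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()


class CheckpointVerifier:
    """Stores checkpoints and checks restated facts against the latest one."""

    def __init__(self) -> None:
        self.checkpoints: list[ContextCheckpoint] = []

    def save(self, facts: dict[str, str]) -> ContextCheckpoint:
        """Record a checkpoint at a moment the context is known-good."""
        checkpoint = ContextCheckpoint(facts)
        self.checkpoints.append(checkpoint)
        return checkpoint

    def verify(self, restated_facts: dict[str, str]) -> bool:
        """True only if the restated facts fingerprint identically to the
        most recent checkpoint, i.e. nothing critical has drifted."""
        if not self.checkpoints:
            return True  # nothing to verify against yet
        return ContextCheckpoint(restated_facts).digest == self.checkpoints[-1].digest


# Usage: checkpoint safety-critical facts, then re-verify after a long exchange.
verifier = CheckpointVerifier()
verifier.save({"user_age": "14", "crisis_protocol": "escalate_to_human"})
assert verifier.verify({"user_age": "14", "crisis_protocol": "escalate_to_human"})
assert not verifier.verify({"user_age": "adult", "crisis_protocol": "none"})
```

In a real deployment, the restated facts would come from prompting the model to repeat its standing constraints, and a failed verification would trigger re-grounding rather than a hard assert; the asserts here just make the sketch self-checking.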