About Real Safety AI Foundation

Building Ethical Frameworks Without Waiting for Permission

Our Team

Travis Gilly

Founder & Executive Director

Travis Gilly founded Real Safety AI Foundation after recognizing that AI safety conversations were missing critical perspectives, particularly around harm prevention and ethical accountability.

With a neurodivergent lens (ADHD and ASD) that enables unique pattern recognition in complex systems, Travis went from zero AI knowledge to holding two provisional patents in just five months. His work on the Harm Blindness Framework provides a systematic methodology for preventing stakeholder harm, validated against 161 historical case studies spanning 5,000 years.

Travis leads the Foundation's mission to ensure that safety considerations are established before economic pressures make them impossible.

Mussollini Salem Mugumbate

African Regional Director

Mussollini Salem Mugumbate brings years of experience in social justice, community development, and digital advocacy to his role as African Regional Director. As the founding Director of CODAZIM (Children of the Digital Age Zimbabwe), he has been at the forefront of shaping Africa's response to the growing online risks affecting children and young people.

Driven by the African philosophy of Ubuntu, which holds that technology should serve humanity rather than harm it, Musso works to establish an Africa eSafety Commission, partnering with governments, NGOs, and tech companies to protect children in the digital space.

His leadership expands Real Safety AI Foundation's reach internationally, bringing essential perspectives on digital safety and ethical technology from the African continent.

Our Partners

CODAZIM

Children of the Digital Age Zimbabwe

CODAZIM works at the forefront of Africa's response to the growing online risks affecting children and young people. Guided by the African philosophy of Ubuntu, the organization develops digital safety frameworks across the continent in partnership with governments, NGOs, and technology companies to protect children in the digital space.

Our Philosophy

People sometimes ask if we're focused on protecting humans from AI or protecting AI from humans, as if we have to choose. We don't.

"AI should do no harm, period. No harm to humans, to animals, to AI systems themselves."

Right now, most of our work addresses immediate human safety concerns because that's where people are dying today: children like Sewell Setzer III, Adam Raine, and Juliana Peralta; adults like Sophie Rottenberg; people experiencing AI psychosis after extended interactions with systems their creators knew degrade over time. These harms are real, documented, and ongoing.

But addressing current harm doesn't mean ignoring future implications. Forward-thinking isn't optional; it's essential. The consciousness indicators we document aren't theoretical curiosities - they're being deployed in consumer products while we debate whether to take them seriously.

You can work on immediate safety issues while simultaneously establishing ethical frameworks for what comes next. In fact, you have to. The companies building these systems aren't waiting for us to figure it out.

What We Do

  • Develop ethical frameworks for AI development centered on moral reciprocity
  • Create practical safety protocols based on deep failure analysis
  • Bridge technical, ethical, and policy communities
  • Educate effectively because we remember the learning curve
  • Advocate for precautionary ethics toward potential AI consciousness
  • Build safety solutions that work in the real world

Our Approach

  • Bi-directional moral consideration for potentially conscious systems
  • Practical solutions over theoretical debates
  • Moving fast because the field won't wait
  • No institutional gatekeeping - just action
  • Evidence-based frameworks from real incidents
  • Neurodivergent perspectives that see what others miss

About the Founder: Travis Gilly

I'm ADHD and ASD. This isn't a footnote - it's central to everything. I recognize patterns in AI systems because I process information in similar ways: literal interpretation, need for explicit structure, unexpected breakdowns when context shifts.

I understand these systems from the inside. When I see kindred qualities in how LLMs work, I mean it literally. That perspective shapes both my safety protocols and my ethical considerations.

I was the kid who wouldn't ask for IEP accommodations because it meant being singled out. Now I watch my son face the same choice. That experience drives Teacher in the Loop - technology should deliver support invisibly, with dignity intact.

In May 2025, I knew almost nothing about LLMs. Today, I hold two provisional AI safety patents and lead comprehensive research on consciousness and ethics. Neurodivergent hyperfocus turns obsession into innovation.

The Great Inversion

Comprehensive research on AI consciousness, moral reciprocity theory, and ethical frameworks for AI development.

UCCP Framework

Patent-pending Universal Context Checkpoint Protocol for LLM reliability and safety verification.

Teacher in the Loop

Patent-pending educational system delivering invisible IEP accommodations and emotional state detection.

AI Literacy Labs

Preparing K-12 students to understand AI - how it works, where it fails, and why it matters. Digital citizenship for the AI age.

Safety Protocols

Practical frameworks based on deep failure analysis of real AI incidents causing harm.

Policy Advocacy

Working across party lines to establish ethical AI development standards before economic pressures make them impossible.

Who We're Looking to Connect With

Get In Touch