Epic Games just dropped some absolutely wild numbers that show exactly how toxic online gaming can get. We’re talking 778,495 moderation actions on Fortnite in just six months – nearly 130,000 actions every month. The scale is honestly unhinged.
The breakdown hits different when you see the specific categories. Cyber harassment led the pack with 365,277 actions. That’s nearly half of all moderation decisions. Hate speech wasn’t far behind at 287,664 cases. These aren’t just minor infractions either – we’re talking about serious stuff that’s making gaming spaces genuinely unsafe for players.
“Epic Games took 778,495+ moderation actions on Fortnite in Jul-Dec 2025
• 365,277 for cyber harassment
• 287,664 for hate speech
• 54,082 for inappropriate language
• 53,894 for spam
• 101 suicide-related interventions
• 83 grooming actions against predators
• 30 CSAM reports reviewed
• 22 terrorist content actions
For voice chat, reported clips get fed through speech-to-text + an AI/LLM that auto-sanctions if it catches a violation. text chat is scanned 24/7 for stuff like self-harm, threats, and predators going after minors. always on in game chats and anything involving minors. when it flags something, a human reviews it. for CSAM they use PhotoDNA to match against known abuse images and report it to NCMEC. the rest is people manually reporting.” – @HYPEX
But here’s where it gets really heavy. Epic had to intervene in 101 suicide-related situations. Think about that for a second. Over 100 times, their moderation team had to step in because someone was in crisis. They also took 83 actions against predators targeting minors. That’s 83 potential grooming situations. The numbers are lowkey terrifying when you realize what they represent.
The tech behind all this moderation is pretty intense. Voice chat moderation is report-driven: when someone reports a voice clip, it gets run through speech-to-text and then an AI/LLM system that can automatically hand out punishments if it catches a violation. Text chat gets scanned 24/7 for things like self-harm, threats, and predators targeting minors – always on in in-game chats and anything involving minors – and a human reviews whatever the system flags.
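To make that pipeline concrete, here’s a minimal sketch of what a report-driven voice moderation flow could look like. To be clear, this is illustrative, not Epic’s actual system: the transcribe and classify stubs, the 0.95 threshold, and the escalation categories are all assumptions, since Epic hasn’t published its models or rules.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Verdict(Enum):
    NO_VIOLATION = auto()
    AUTO_SANCTION = auto()   # clear-cut violation, punished automatically
    HUMAN_REVIEW = auto()    # ambiguous or safety-critical, escalate


@dataclass
class VoiceReport:
    reported_player: str
    clip_audio: bytes


def transcribe(clip_audio: bytes) -> str:
    """Toy stand-in for the speech-to-text stage."""
    return clip_audio.decode("utf-8", errors="ignore")


def classify(transcript: str) -> tuple[str, float]:
    """Toy stand-in for the AI/LLM classifier: returns (category, confidence)."""
    text = transcript.lower()
    if "hurt yourself" in text:
        return ("self_harm", 0.99)
    if "trash" in text or "uninstall" in text:
        return ("harassment", 0.97)
    return ("none", 0.0)


# Assumed policy knobs -- Epic hasn't published its actual thresholds.
AUTO_SANCTION_THRESHOLD = 0.95
ALWAYS_ESCALATE = {"self_harm", "threats", "minor_safety"}


def handle_report(report: VoiceReport) -> Verdict:
    category, confidence = classify(transcribe(report.clip_audio))
    if category == "none":
        return Verdict.NO_VIOLATION
    if category in ALWAYS_ESCALATE:          # keep a human in the loop for safety
        return Verdict.HUMAN_REVIEW
    if confidence >= AUTO_SANCTION_THRESHOLD:
        return Verdict.AUTO_SANCTION         # auto-sanction clear violations
    return Verdict.HUMAN_REVIEW


print(handle_report(VoiceReport("player42", b"you are trash, uninstall")))
# -> Verdict.AUTO_SANCTION
```

The key design choice the tweet describes is that automation only hands out punishments for clear-cut cases, while anything safety-critical still gets escalated to a person.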
For the most serious stuff like CSAM (child sexual abuse material), they’re using Microsoft’s PhotoDNA technology to match uploads against databases of known abuse images. When they find something, it goes straight to NCMEC (the National Center for Missing and Exploited Children) – a report US providers are legally required to make. That’s not gaming company territory anymore – that’s law-enforcement-level reporting.
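The hash-and-match pattern itself is simple, even though PhotoDNA’s actual algorithm is proprietary (it’s a perceptual hash, robust to resizing and re-encoding). Here’s a rough sketch of the general flow, with an ordinary SHA-256 exact match standing in for the perceptual hash and a hypothetical stubbed-out reporting call:

```python
import hashlib

# Illustrative only -- not Epic's implementation. A real system would use a
# perceptual hash like PhotoDNA so that resized or re-encoded copies still
# match; plain SHA-256 only catches byte-identical files.

KNOWN_HASH_DATABASE: set[str] = set()  # populated from clearinghouse hash lists


def file_ncmec_report(user_id: str, digest: str) -> None:
    # Hypothetical stub: real providers submit through NCMEC's CyberTipline.
    print(f"CyberTipline report queued: user={user_id} hash={digest[:12]}...")


def scan_upload(user_id: str, image_bytes: bytes) -> bool:
    """Return True (and file a report) if an upload matches known material."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest in KNOWN_HASH_DATABASE:
        file_ncmec_report(user_id, digest)
        return True
    return False


print(scan_upload("player7", b"harmless screenshot"))  # -> False
```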
What’s wild is how normalized this has all become. Like, we just accept that every major game needs this massive moderation infrastructure. Remember when the biggest concern was someone saying a bad word? Now we’re dealing with grooming, terrorist content, and suicide interventions. Gaming grew up, but not everyone in the community did.
The AI assistance is honestly necessary at this scale. There’s no way human moderators alone could review nearly 800,000 cases in six months – that works out to more than 4,200 cases every single day. But the fact that we need AI to catch this much toxicity says something about gaming culture that’s hard to ignore.
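If you want to check that math yourself, it’s a two-line calculation using the report’s own total (the Jul-Dec window assumed to span roughly 183 days):

```python
# Back-of-the-envelope check on the scale, from the reported total.
total_actions = 778_495

print(f"{total_actions / 6:,.0f} per month")   # ~129,749
print(f"{total_actions / 183:,.0f} per day")   # ~4,254
```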
Some players might think Epic’s being too heavy-handed, but when you see numbers like these, it’s clear they’re not overreacting. The alternative is letting harassment and predators run wild in spaces where kids are playing. Nobody wants that.
This transparency from Epic is actually pretty refreshing. Most companies keep their moderation stats locked down tighter than their source code. But putting these numbers out there shows the real scope of what they’re dealing with. It’s not just a few bad actors – it’s a systemic problem that requires massive resources to address.
The gaming industry as a whole is going to have to grapple with these realities. As games get more social and voice chat becomes standard, the potential for abuse grows exponentially. Every platform is going to need similar systems, and that’s going to cost serious money.
Looking ahead, expect more companies to follow Epic’s lead on transparency. Players deserve to know how their safety is being protected, especially parents whose kids are spending hours in these virtual spaces. The days of “just mute them” as a solution are long over.
We’ll probably see even more AI integration in moderation systems too. The technology is only getting better at detecting nuanced threats and context. But human oversight will always be crucial for the most serious cases.
The real question is whether these massive moderation efforts will actually change gaming culture, or if toxic players will just find new ways to be awful. Epic’s fighting the good fight, but it’s going to take the whole community stepping up to really solve this problem.

