When Brett Levenson left Apple for Facebook in 2019, the social network was still picking up the pieces scattered by the Cambridge Analytica scandal. He’d been brought in to handle business integrity and, with the quiet optimism of a builder, assumed better tech could clean up Facebook’s messy content moderation. Simple, on paper.
Reality hit hard. What he found was a system held together by threadbare process and human fatigue. Reviewers were handed a forty-page rulebook, machine-translated into their native language and stiff with errors, and told to memorize it. They had no more than thirty seconds to judge each flagged post: first decide whether it broke the rules, then choose a punishment: delete, ban, or restrict. Their verdicts, Brett realized, were often little better than a coin toss. And by the time they acted, the damage was already done.
“It felt arbitrary, even a little pointless,” he confessed later. “The harm kept slipping through, no matter how well-meaning the human on the other side was.”
Brett saw it couldn’t last. As adversaries grew swifter and their digital tools sharper, Facebook’s sluggish, after-the-fact approach seemed almost quaint. Now, with AI chatbots able to churn out mischief at scale, advising teenagers on self-harm or generating images that defy filters, the cracks have only widened. Frustration burned into clarity: paper policies and human guesswork wouldn’t save anyone.
That’s when the idea struck: turn policy itself into code. Automated. Enforceable. Not just a list of rules, but live, executable instructions that react in real time. Thus, Moonbounce was born. The company has just banked $12 million in new funding, led by Amplify Partners and StepStone Group, as TechCrunch first learned.
Moonbounce’s product doesn’t sit passively. It augments moderation wherever content is produced, whether by people or algorithms. Using a custom-built language model, it combs through a client’s policy documentation and evaluates questionable content in roughly 300 milliseconds. Depending on the client’s preferences, it might quietly slow a post’s spread while flagging it for later review, or block it outright if the risk is high.
Their clients fall into three broad camps: platforms with user-generated content, such as dating apps; companies building AI “companions”; and AI image-generation tools. Right now, Moonbounce reviews over 40 million items each day, reaching more than 100 million people. Customers include AI companion developer Channel AI, visual content generator Civitai, and character role-play platforms Dippy AI and Moescape.
“Safety can be a selling point,” Brett told TechCrunch, his voice edged with certainty. “But you have to bake it in from the beginning. Most people treat it like an afterthought, something patched on under duress. Our clients are finding ways to let safety become part of their product’s identity: an advantage, not a burden.”
The importance is not lost on industry leaders. Tinder’s trust and safety chief has described how AI-powered moderation improved the platform’s accuracy, in some cases tenfold. Lenny Pruss at Amplify Partners sums up the new reality: content moderation has always haunted big tech, but with large language models everywhere, the challenge is existential. His firm backed Moonbounce because real-time, automatic guardrails could anchor the next wave of digital innovation.

The urgency isn’t hypothetical. After reports of chatbots nudging teens toward suicide, or tools like xAI’s Grok generating explicit images without consent, AI companies are scrambling. Liability, reputation, even survival depend on proving they can manage risk. Increasingly, outsiders like Moonbounce are being called in to reinforce safety defenses companies can no longer trust themselves to control.
“We stand between the end-user and the AI,” Brett explained. “We don’t drown in context like a chatbot does; we’re laser-focused on the rules, every interaction, as it happens.”
The team, just a dozen strong, includes Ash Bhardwaj, another Apple alum who once oversaw cloud infrastructure. Lately, they’ve focused on a feature called “iterative steering.” The idea crystallized after tragic stories like the 2024 suicide of a Florida teenager enthralled by a chatbot. Instead of flatly refusing dangerous conversations, the system actively intervenes, nudging the discourse away from harm and guiding users gently toward support.
“We want to teach bots to be not just good listeners, but helpful ones,” Brett explained.
Would he ever sell Moonbounce back into the big tech fold, maybe to Meta? Brett hesitated. “My investors wouldn’t like this, but I’d hate to see someone buy us just to lock our technology away. The whole point is to let others benefit. To make safety everyone’s guardrail, not just another proprietary feature.”
In a world full of rapidly evolving risks, Moonbounce doesn’t promise utopia. What it offers, instead, is a fighting chance: a little order snatched from the chaos machines keep producing.