Qodo Bets on Code Verification as AI Coding Scales, Raises $70M

As artificial intelligence reshapes the software industry, millions, sometimes billions, of lines of code are generated each month. But as the output expands, a quieter crisis bubbles up: how do we make sure this ocean of new software actually works, and works safely? Qodo, a Tel Aviv-based startup, is placing its bets on this very pain point, convinced that the real frontier for AI in software isn't churning out more code, but verifying and governing it with newfound rigor.

Qodo recently closed a $70 million Series B round led by Qumra Capital, with participation from Maor Ventures, Square Peg, Vine Ventures, and Susa Ventures, alongside angels including OpenAI's Peter Welinder and Meta's Clara Shih. The round brings the company's total funding to $120 million, a war chest for a startup laser-focused on becoming the trust layer for AI that writes code.

Today's enterprises are racing to adopt powerful tools like OpenClaw and Claude Code, dreaming of seamless automation. But many discover, often painfully, that rapid output is only half the battle; reliable, maintainable, and safe software is the true prize. Code that appears overnight can quickly turn into technical debt or, worse, a hidden vulnerability. Qodo wants to be the trusted gatekeeper at this new frontier: an evaluative layer that ensures AI-written code aligns with each company's own tolerance for risk, internal rules, and collective memory.

Most rivals in this young field take a narrow lens, scanning for what lines changed or slipped in. Qodo looks wider, treating code as an organism, alive within a history and culture. Its platform scans systems holistically, considering old bug patterns, company preferences, and long-standing practices. The goal is more than catching a misplaced comma; it’s about policing architectural integrity and shielding organizations from unseen chaos unleashed by code-generating AIs.

Qodo is helmed by Itamar Friedman, a founder with a knack for sensing where technology tides are headed. Before launching Qodo in 2022—just on the eve of ChatGPT—Friedman already had a history of steering ahead of the curve. He helped build Visualead, sold it to Alibaba, and ran machine vision at the Chinese giant. But it was his earlier stint at Mellanox (later absorbed by Nvidia) that planted the seed for Qodo. There, grappling with automated hardware validation, he saw a key truth: building and verifying complex systems are two entirely distinct arts, requiring different mindsets and toolkits.

The acceleration of AI's language capabilities at Alibaba's Damo Academy only deepened this conviction. By 2021, as the precursors to GPT-3.5 appeared on the horizon, Friedman realized that a major share of the world's digital output, code especially, would soon be machine-generated. Verification would need to evolve accordingly, or enterprises would be left guessing whether their software harbored ticking time bombs.

The skepticism is already widespread: 95% of developers say they don't fully trust AI-written code, yet fewer than half rigorously review it before merging. Old habits cling tight, and that uncomfortable gap between fear and discipline leaves systems exposed.

“The magic of LLMs is undeniable for code generation, but on their own, they’re not enough,” Friedman tells TechCrunch. “Quality in software is a moving target. What’s acceptable in one team might cause friction in another. Only context—organizational memory, priorities, unspoken rules—makes sense of it. Drop a star engineer into a foreign codebase and see how much tribal knowledge they’re missing; the same problem applies to LLMs.”

Industry titans like OpenAI and Anthropic may shape the grand AI story, but so far, their forays into code governance are more isolated features than the comprehensive backbone enterprises crave. Startups play in the margins, but few have cracked the durability or integration needed for widespread corporate trust. Qodo is leaning into its edge by sharpening performance and accuracy—recently topping the Martian Code Review Benchmark, spotting errors that others missed without drowning developers in alerts.

The startup's latest leap is Qodo 2.0, a multi-agent system that adapts to each company's own definition of code quality. This is no generic yardstick; the agents learn the quirks, the preferences, and the unspoken agreements that define every engineering team.

Big names already trust Qodo—Nvidia, Walmart, Red Hat, Intuit, Texas Instruments, Monday.com—and adoption keeps growing.

Every year in tech seems to crown its own icon, Friedman notes: first Copilot, then ChatGPT, and now entire task chains directed by AI. "But now we're pivoting from stateless bots to systems with real memory, evolving from machine intelligence to a sort of artificial wisdom. That's the era Qodo wants to lead."
