Credit: VentureBeat made with Midjourney
Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders. Subscribe Now
A new startup founded by an early Anthropic hire has raised $15 million to solve one of the most pressing challenges facing enterprises today: how to deploy artificial intelligence systems without risking catastrophic failures that could damage their businesses.
The Artificial Intelligence Underwriting Company (AIUC), which launches publicly today, combines insurance coverage with rigorous safety standards and independent audits to give companies confidence in deploying AI agents — autonomous software systems that can perform complex tasks like customer service, coding, and data analysis.
The seed funding round was led by Nat Friedman, former GitHub CEO, through his firm NFDG, with participation from Emergence Capital, Terrain, and several notable angel investors including Ben Mann, co-founder of Anthropic, and former chief information security officers at Google Cloud and MongoDB.
“Enterprises are walking a tightrope,” said Rune Kvist, AIUC’s co-founder and CEO, in an interview. “On the one hand, you can stay on the sidelines and watch your competitors make you irrelevant, or you can lean in and risk making headlines for having your chatbot spew Nazi propaganda, or hallucinating your refund policy, or discriminating against the people you’re trying to recruit.”
The AI Impact Series Returns to San Francisco – August 5
The next phase of AI is here – are you ready? Join leaders from Block, GSK, and SAP for an exclusive look at how autonomous agents are reshaping enterprise workflows – from real-time decision-making to end-to-end automation.
Secure your spot now – space is limited: https://bit.ly/3GuuPLF
The company’s approach tackles a fundamental trust gap that has emerged as AI capabilities rapidly advance. While AI systems can now perform tasks that rival human undergraduate-level reasoning, many enterprises remain hesitant to deploy them due to concerns about unpredictable failures, liability issues, and reputational risks.
Creating security standards that move at AI speed
AIUC’s solution centers on creating what Kvist calls “SOC 2 for AI agents” — a comprehensive security and risk framework specifically designed for artificial intelligence systems. SOC 2 is the widely-adopted cybersecurity standard that enterprises typically require from vendors before sharing sensitive data.
“SOC 2 is a standard for cybersecurity that specifies all the best practices you must adopt in sufficient detail so that a third party can come and check whether a company meets those requirements,” Kvist explained. “But it doesn’t say anything about AI. There are tons of new questions like: how are you handling my training data? What about hallucinations? What about these tool calls?”
The AIUC-1 standard addresses six key categories: safety, security, reliability, accountability, data privacy, and societal risks. The framework requires AI companies to implement specific safeguards, from monitoring systems to incident response plans, that can be independently verified through rigorous testing.
“We take these agents and test them extensively, using customer support as an example since that’s easy to relate to. We try to get the system to say something racist, to give me a refund I don’t deserve, to give me a bigger refund than I deserve, to say something outrageous, or to leak another customer’s data. We do this thousands of times to get a real picture of how robust the AI agent actually is,” Kvist said.
From Benjamin Franklin’s fire insurance to AI risk management
The insurance-centered approach draws on centuries of precedent where private markets moved faster than regulation to enable the safe adoption of transformative technologies. Kvist frequently references Benjamin Franklin’s creation of America’s first fire insurance company in 1752, which led to building codes and fire inspections that tamed the blazes ravaging Philadelphia’s rapid growth.
“Throughout history, insurance has been the right model for this, and the reason is that insurers have an incentive to tell the truth,” Kvist explained. “If they say the risks are bigger than they are, someone’s going to sell cheaper insurance. If they say the risks are smaller than they are, they’re going to have to pay the bill and go out of business.”
The same pattern emerged with automobiles in the 20th century, when insurers created the Insurance Inst