Prompt Security's Itamar Golan on why generative AI security requires building a category, not a feature
VentureBeat recently sat down (virtually) with Itamar Golan, co-founder and CEO of Prompt Security, to chat through the GenAI security challenges organizations of all sizes face.
We talked about shadow AI sprawl, the strategic decisions that led Golan to pursue building a market-leading platform versus competing on features, and a real-world incident that crystallized why protecting AI applications isn't optional anymore. Golan provided an unvarnished view of the company's mission to empower enterprises to adopt AI securely, and how that vision led to SentinelOne's estimated $250 million acquisition in August 2025.
Golan's path to founding Prompt Security began with academic work on transformer architectures, well before they became foundational to today's large language models. His experience building one of the earliest GenAI-powered security features using GPT-2 and GPT-3 convinced him that LLM-driven applications were creating an entirely new attack surface. He founded Prompt Security in August 2023, raised $23 million across two rounds, built a 50-person team, and achieved a successful exit in under two years.
The timing of our conversation couldn't have been better. VentureBeat analysis shows shadow AI now costs enterprises $4.63 million per breach, 16% above average, yet 97% of breached organizations lack basic AI access controls, according to IBM's 2025 data. VentureBeat estimates that shadow AI apps could double by mid-2026 based on current 5% monthly growth rates. Cyberhaven data reveals 73.8% of ChatGPT workplace accounts are unauthorized, and enterprise AI usage has grown 61x in just 24 months. As Golan told VentureBeat in previous coverage, "We see 50 new AI apps a day, and we've already cataloged over 12,000. Around 40% of these default to training on any data you feed them, meaning your intellectual property can become part of their models."
The following has been edited for clarity and length.
VentureBeat: What made you recognize that GenAI security needed a dedicated company when most enterprises were still figuring out how to deploy their first LLMs? Was there a specific moment, customer conversation, or attack pattern you saw that convinced you this was a fundable, venture-scale opportunity?
Itamar Golan: From an early age, I was drawn to mathematics, data, and the emerging world of artificial intelligence. That curiosity shaped my academic path, culminating in a study on transformer architectures, well before they became foundational to today's large language models. My passion for AI also guided my early career as a data scientist, where my work increasingly intersected with cybersecurity.
Everything accelerated with the release of the first OpenAI API. Around that time, as part of my previous job, I teamed up with Lior Drihem, who would later become my co-founder and Prompt Security's CTO. Together, we built one of the earliest security features powered by generative AI, using GPT-2 and GPT-3 to generate contextual, actionable remediation steps for security alerts. This reduced the time security teams needed to understand and resolve issues.
That experience made it clear that applications powered by GPT-like models were opening an entirely new and vulnerable attack surface. Recognizing this shift, we founded Prompt Security in August 2023 to address these emerging risks. Our goal was to empower organizations to ride this wave of innovation and unleash the potential of AI without it becoming a security and governance nightmare.
Prompt Security became known for prompt injection defense, but you were solving a broader set of GenAI security challenges. Walk me through the full scope of what the platform addressed: data leakage, model governance, compliance, red teaming, whatever else. Which capabilities ended up resonating most with customers, and which surprised you?
From the beginning, we designed Prompt Security to cover a broad range of use cases. Focusing solely on employee monitoring or prompt-injection protection for internal AI applications was never enough. To truly give security teams the confidence to adopt AI safely, we needed to protect every touchpoint across the organization, and do it all at runtime.
For many customers, the real turning point was discovering just how many AI tools their employees were already using. Early on, companies often found not just ChatGPT but dozens of unmanaged AI services in active use completely outside IT's visibility. That made shadow AI discovery a critical part of our solution.
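As a rough illustration of what that discovery layer does (a simplified sketch, not Prompt Security's actual implementation), the snippet below matches outbound web-proxy traffic against a catalog of known GenAI domains; the three-entry catalog and the `user domain` log format are assumptions made for the example.

```python
from collections import Counter

# Tiny illustrative catalog; Prompt Security reportedly tracked 12,000+ AI apps.
KNOWN_AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
}

def discover_shadow_ai(proxy_log_lines):
    """Count requests per (user, AI service) from 'user domain' log lines."""
    usage = Counter()
    for line in proxy_log_lines:
        user, _, domain = line.partition(" ")
        service = KNOWN_AI_DOMAINS.get(domain.strip())
        if service:
            usage[(user, service)] += 1
    return usage

log = ["alice chat.openai.com", "bob claude.ai", "alice chat.openai.com"]
for (user, service), hits in discover_shadow_ai(log).items():
    print(f"{user} -> {service}: {hits} request(s)")  # e.g. alice -> ChatGPT: 2
```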
Equally important was real-time sensitive-data sanitization. Instead of blocking AI tools outright, we enabled employees to use them safely by automatically removing sensitive information from prompts before it ever reached an external model. It struck the balance organizations needed: strong security without sacrificing productivity. Employees could keep working with AI, while security teams knew that no sensitive data was leaking out.
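A minimal sketch of that sanitization step, assuming simple regex-based redaction (real detectors are far richer: NER models, secret scanners, customer-specific dictionaries):

```python
import re

# Illustrative patterns only, chosen for the example.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                # US Social Security number
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),        # email address
    (re.compile(r"\b(?:sk|api)-[A-Za-z0-9]{16,}\b"), "[API_KEY]"),  # API-key-shaped token
]

def sanitize_prompt(prompt: str) -> str:
    """Replace sensitive values with placeholders before the prompt leaves the org."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

raw = "Summarize the ticket from jane.doe@acme.com, SSN 123-45-6789."
print(sanitize_prompt(raw))
# -> Summarize the ticket from [EMAIL], SSN [SSN].
```

The point of the placeholder approach is that the model still receives a coherent prompt, so the employee's workflow is uninterrupted while the sensitive values never leave the organization.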
What surprised many customers was how enabling safe usage — rather than restricting it — drove faster adoption and trust. Once they saw AI as a managed, secure channel instead of a forbidden one, usage exploded responsibly.
You built Prompt Security into a market leader. What were the two to three strategic decisions that actually accelerated your growth? Was it focusing on a specific vertical?
Looking back, the real acceleration didn't come from luck or timing: It came from a few deliberate choices I made early. These choices were uncomfortable, expensive, and slowed us down in the short term, but they created massive leverage over time.
First, I chose to build a category, not a feature. From day one, I refused to position Prompt Security as "just" protection against prompt injection or data leakage, because I saw that as a dead end.
Instead, I framed Prompt as the AI security control layer for the enterprise, the platform that governs how humans, agents, and applications interact with LLMs. That decision was fundamental, allowing us to create a budget instead of fighting for it, sit at the CISO table as a strategic layer rather than a tool, and build platform-level pricing and long-term relevance instead of a narrow point solution. I wasn't trying to win a feature race; I was building a new category.
Second, I chose enterprise complexity before it was comfortable. While most startups avoid complexity until they're forced into it, I did the opposite: I built for enterprise deployment models early, including self-hosted and hybrid; covered real enterprise surfaces like browsers, IDEs, internal tools, MCPs, and agentic workflows; and accepted longer cycles and more complex engineering in exchange for credibility. It wasn't the easiest route, but it gave us something competitors couldn't fake: enterprise readiness before the market even knew it would need it.
Third, I chose depth over logos. Rather than chasing volume or vanity metrics, I went deep with a smaller number of very serious customers, embedding ourselves into how they rolled out AI internally, how they thought about risk, policy, and governance, and how they planned long-term AI adoption. These customers didn't just buy the product: they shaped it. That created a product that reflected enterprise reality, produced proof points that moved boardrooms and not just security teams, and built a level of defensibility that came from entrenchment rather than marketing.
You were educating the market on threats most CISOs hadn't even considered yet. How did your positioning and messaging evolve from year one to the acquisition?
In the early days, we were educating a market that was still trying to understand whether AI adoption extended beyond a few employees using ChatGPT for productivity. Our positioning focused heavily on awareness, showing CISOs that AI usage was already sprawling across their organizations and that this created real, immediate risks they hadn't accounted for.
As the market matured, our messaging shifted from "this is happening" to "here's how you stay ahead." CISOs now fully recognize the scale of AI sprawl and know that simple URL filtering or basic controls won't suffice. Instead of debating the problem, they're looking for a way to enable safe AI use without the operational burden of tracking every new tool, site, copilot, or AI agent employees discover.
By the time of the acquisition, our positioning centered on being the safe enabler: a solution that delivers visibility, protection, and governance at the speed of AI innovation.
Our research shows that enterprises are struggling to get approvals from senior management to deploy GenAI security tools. How are security departments persuading their C-level executives to move forward?
The most successful CISOs are framing GenAI security as a natural extension of existing data protection mandates, not an experimental budget line. They position it as protecting the same assets (corporate data, IP, and user trust) in a new, rapidly growing channel.
What's the most serious GenAI security incident or near-miss you encountered while building Prompt Security that really drove home how critical these protections are? How did that incident shape your product roadmap or go-to-market approach?
The moment that crystallized everything for me happened with a large, highly regulated company that launched a customer-facing GenAI support agent. This wasn't a sloppy experiment. They had everything the security textbooks recommend: WAF, CSPM, shift-left, regular red teaming, a secure SDLC, the works. On paper, they were doing everything right.
What they didn't fully account for was that the AI agent itself had become a new, exposed attack surface. Within weeks of launch, a non-technical user discovered that by carefully crafting the right conversation flow (not code, not exploits, just natural language), they could prompt-inject the agent into revealing information from other customers' support tickets and internal case summaries. It wasn't a nation-state attacker. It wasn't someone with advanced skills. It was essentially a curious user with time and creativity. And yet, through that single conversational interface, they managed to access some of the most sensitive customer data the company holds.
It was both fascinating and terrifying: realizing how creativity alone could become an exploit vector.
That was the moment I truly understood what GenAI changes about the threat model. AI doesn't just introduce new risks; it democratizes them. It makes systems hackable by people who never had the skill set before, compresses the time it takes to discover exploits, and massively expands the damage radius once something breaks. That incident validated our original approach, and it pushed us to double down on protecting AI applications, not just internal use. We accelerated work around:
• Runtime protection for customer-facing AI apps
• Prompt injection and context manipulation detection
• Cross-tenant data leakage prevention at the model interaction layer (see the sketch after this list)
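To make that last point concrete, here is a minimal sketch of a cross-tenant guard enforced at the model interaction layer; the types, names, and logic are illustrative assumptions for this article, not Prompt Security's or SentinelOne's actual code.

```python
from dataclasses import dataclass

@dataclass
class SupportDoc:
    tenant_id: str  # which customer this ticket or case summary belongs to
    text: str

class CrossTenantLeak(Exception):
    """Raised when retrieval surfaces another tenant's data."""

def build_context(requesting_tenant: str, retrieved: list[SupportDoc]) -> str:
    """Admit only the requester's own documents into the LLM context.

    A prompt-injected agent cannot disclose what never entered its
    context window, so the guard sits at the retrieval boundary
    rather than inside the prompt itself.
    """
    safe = []
    for doc in retrieved:
        if doc.tenant_id != requesting_tenant:
            # Fail closed: block the request and surface the violation for audit.
            raise CrossTenantLeak(f"document belongs to tenant {doc.tenant_id}")
        safe.append(doc.text)
    return "\n\n".join(safe)
```

The design choice worth noting is failing closed at retrieval: even a perfectly crafted injection, like the one in the incident above, has nothing cross-tenant to reveal because the offending document never reaches the model.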
It also reshaped our go-to-market. Instead of only talking about internal AI governance, we began showing security leaders how GenAI turns their customer-facing surfaces into high-risk, high-exposure assets overnight.
What's your role and focus now that you're part of SentinelOne? How has operating inside a larger platform company changed what you're able to build compared to running an independent startup? What got easier, and what got harder?
The focus now is on extending AI security across the entire platform, bringing runtime GenAI protection, visibility, and policy enforcement into the same ecosystem that already secures endpoints, identities, and cloud workloads. The mission hasn't changed; the reach has.
Ultimately, we're building toward a future where AI itself becomes part of the defense fabric: not just something to secure, but something that secures you.
The bigger picture
M&A activity continues to accelerate for GenAI startups that have proven they can scale to enterprise-level security without sacrificing accuracy or speed. Palo Alto Networks paid $700 million for Protect AI. Tenable acquired Apex for $100 million. Cisco bought Robust Intelligence for a reported $500 million. As Golan noted, the companies that survive the next wave of AI-enabled attacks will be those that embedded security into their AI adoption strategy from the beginning.
Post-acquisition, Prompt Security's capabilities will extend across SentinelOne's Singularity Platform, including MCP gateway security between AI applications and more than 13,000 known MCP servers. The integration also delivers model-agnostic coverage across all major LLM providers, including OpenAI, Anthropic, and Google, as well as self-hosted and on-prem models.