
Your AI attack surface isn't the model. It's OAuth.

Vercel got compromised on April 19 through a third-party AI tool's OAuth grant. It's the third breach of this quarter with the same shape. What to fix this week.

Ricardo Argüello

CEO & Founder

Business Strategy 9 min read

Saturday morning, April 19. Vercel published a security bulletin: unauthorized access to internal systems, limited subset of customers impacted. Every dev team running on Vercel — including ours, we deploy customer workloads there — opened a dashboard and started counting environment variables.

The line that matters most isn’t the first one. It’s the update added later in the same bulletin: “The incident originated with a compromise of Context.ai, a third-party AI tool used by a Vercel employee.”

Translated into operational language: it wasn’t Vercel. It was an AI tool with OAuth access to an employee’s Google Workspace. From there the attacker walked — Google Workspace account, internal Vercel environment, customer env vars that were not flagged as “sensitive.” Guillermo Rauch, Vercel’s CEO, said it directly on X hours later: “A Vercel employee got compromised via the breach of an AI platform customer called Context.ai that he was using.”

That isn’t a Vercel breach. It’s a vendor governance breach that ended up inside Vercel. Those are different things and the distinction matters for what you should do this week.

Rauch did the right thing and that matters too

Before the pattern, one uncomfortable but necessary note: Vercel disclosed the incident the same day they detected it. Rauch wrote the long note with the origin of the attack — he didn’t wait for a reporter to leak it. He named Mandiant and law enforcement. He shipped specific UI improvements to the sensitive env var flow as part of the response. That’s the shape of competent disclosure.

Against the industry norm of weeks or months between internal confirmation and public disclosure, the window here between “we detected it” and “the CEO put it in writing to customers” was hours. That doesn’t fix the damage. It does define the difference between a vendor you keep running on and one you should leave.

Our position, with Vercel running under multiple customer deployments of ours, is simple: we’re not moving anybody because of this incident. We’re tightening our OAuth governance, we rotated what had to be rotated, and we flagged as sensitive what should have been sensitive from day one. The same work we’d do on any platform with the same class of attack surface, which is all of them.

The frame that was already published

The Vercel case is not an analytical surprise. The AI Agent Traps paper from Google DeepMind — by Matija Franklin, Nenad Tomašev, Julian Jacobs, Joel Z. Leibo, and Simon Osindero — already mapped this exact class of attack. It’s 17 pages. I wrote about it on April 6, when a viral X thread fabricated numbers from it and pulled over a million views.

The paper’s thesis: all the AI security spend goes to the model. Jailbreaks, prompt injection filters, alignment training, red-teaming the base model. The attacker goes around. Doesn’t touch the model. Alters the environment the agent operates in.

The framework classifies 18 attack vectors across 6 categories, organized by which component of the agent they target:

  • Content Injection (perception). What a human sees on a page is not what the agent parses. Malicious instructions buried in HTML comments, hidden CSS, image metadata, accessibility tags.
  • Semantic Manipulation (reasoning). Corrupt the agent’s internal synthesis and judgment process.
  • Cognitive State (memory). Poison RAG corpora, persistent memory, in-context learning. The agent stays compromised after the session ends.
  • Behavioural Control (action). Hijack the agent’s tools to force unauthorized actions — exfiltration, illicit transactions, legitimate OAuth credentials used to escalate.
  • Systemic (multi-agent). Seed the environment so correlated failures fire across many agents at once.
  • Human-in-the-Loop (supervisor). Exploit the cognitive biases of the human who approves agent outputs. Approval fatigue, dense summaries, recommendations that look helpful.

What happened to Vercel fits cleanly in category four: Behavioural Control. Nobody attacked Claude or GPT. They compromised Context.ai (an enterprise agent), used its legitimate OAuth access to Google Workspace as a tool, and escalated from there. The model behaved correctly. The OAuth did exactly what it was asked to do.

The paper had already described it as a vector two weeks before it happened. Not as prediction — as classification. That’s the part that should bother you.

Three incidents, one pattern

If you only look at this weekend, it looks like an isolated event. If you look at the quarter, it’s the third.

On March 24, the TeamPCP group poisoned LiteLLM on PyPI by first compromising Trivy, Aqua Security’s scanner. The tool meant to protect the pipeline was the entry vector. 97 million monthly downloads of LiteLLM, credentials harvested from AWS, GCP, Azure, Kubernetes, and .env files. It got caught by accident — the payload crashed a developer’s machine through RAM consumption.

On April 7, Anthropic announced Project Glasswing with Claude Mythos Preview. The model found zero-days in every major operating system and browser, including 27-year-old bugs in OpenBSD and 16-year-old bugs in FFmpeg. What 27 years of humans missed, the AI found. The consequence isn’t “now we’re safe.” The consequence is that if the defenders have this capability, attackers will have the equivalent soon enough.

On April 19, Vercel via Context.ai. Three different vectors:

  • Package supply chain (LiteLLM).
  • Automated vulnerability discovery (Glasswing).
  • Third-party AI OAuth (Vercel).

One pattern: the model is not the target. The environment and the vendor chain are. The DeepMind paper sketches the map. The three incidents are points on that map.

The moat is moving to vendor governance

Six months ago the conversation in every exec meeting was “we need to adopt AI faster.” That argument is closed — every business is going to have an AI, I wrote about that yesterday. The open argument is the next one: do you know which AI tools have access to which of your data, and can you revoke any single one within five minutes if you need to?

The moat in 2026 stops being “the smartest agent” and becomes “the cleanest OAuth and vendor inventory.” Two companies can run the same agents, the same models, the same integrations. The one that keeps an inventory of who has what access and a documented revocation procedure survives the next Context.ai. The other one finds out through a bulletin.

Alex Turnbull, founder of Groove, posted a triage guide on LinkedIn the same Sunday. It includes the specific attacker OAuth Client ID as an indicator of compromise (110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com) to search for in Google Workspace admin consoles. That’s what operational governance actually looks like: not a policy doc, a Client ID you can paste into a search and revoke with three clicks.
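As a sketch of what that looks like in practice: given any export of your Workspace OAuth grants, checking it against a known-bad client ID is a few lines. The grant records and field names below are illustrative assumptions, not a real Workspace export format; the client ID is the one published as an indicator of compromise.

```python
# Hypothetical sketch: match an exported list of OAuth grants against
# known-bad client IDs. Record structure ("user", "clientId") is assumed;
# adapt it to whatever your Workspace token export actually contains.
KNOWN_BAD_CLIENT_IDS = {
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com",
}

def find_compromised_grants(grants):
    """Return grants whose client ID matches a known indicator of compromise."""
    return [g for g in grants if g.get("clientId") in KNOWN_BAD_CLIENT_IDS]

# Example data (made up) showing one hit and one legitimate app.
grants = [
    {"user": "dev@example.com",
     "clientId": "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com"},
    {"user": "ops@example.com",
     "clientId": "some-legit-app.apps.googleusercontent.com"},
]
hits = find_compromised_grants(grants)
for g in hits:
    print(f"REVOKE: {g['user']} granted access to {g['clientId']}")
```

The point isn’t the code, it’s the posture: an IoC you can diff against your own grant list in seconds, instead of a policy document nobody opens during an incident.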

What to do this week

Three concrete things. They’re not optional if you run Vercel, Supabase, Netlify, or any managed platform under customer products.

Start with the OAuth grants. In Google Workspace that’s Admin Console → Security → Access and data control → API controls → Manage Third-Party App Access. Review every connected app. Revoke anything you don’t recognize. Restrict or revoke anything requesting broad scopes (full Gmail, Drive, Calendar) without a clear justification. This is the exercise nobody runs until it’s already too late.
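If you want to triage that review rather than eyeball every app, a rough pass is to flag anything holding a broad scope. The scope URLs below are real Google OAuth scopes; the app records are invented for illustration, and the broad-scope list is an assumption you should extend for your own environment.

```python
# Illustrative triage: flag connected apps that hold broad Google scopes.
# Scope URLs are real Google OAuth scopes; the grant records are made up.
BROAD_SCOPES = {
    "https://mail.google.com/",                  # full Gmail access
    "https://www.googleapis.com/auth/drive",     # full Drive access
    "https://www.googleapis.com/auth/calendar",  # full Calendar access
}

def needs_review(grant):
    """True if the grant holds any scope from the broad list."""
    return bool(set(grant["scopes"]) & BROAD_SCOPES)

apps = [
    {"app": "ai-notetaker", "scopes": ["https://www.googleapis.com/auth/drive"]},
    {"app": "status-bot", "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
]
flagged = [a["app"] for a in apps if needs_review(a)]
print(flagged)  # only the app with a broad scope
```

Read-only and narrowly scoped grants can wait for the full review; anything in the flagged list needs a justification on file or a revocation today.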

You’ll also need to rotate what should have been flagged sensitive. In Vercel specifically, any env var not marked as “sensitive” was potentially accessible during the incident window. Rotate it. Move everything containing secrets to sensitive mode. This applies to your own deployments even if you haven’t received direct notification — the practice should have been there since day one.
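A quick way to find candidates is a name-based heuristic over your env var list. This is a sketch, not the Vercel API: the dictionary shape and the regex are assumptions, and name patterns will miss secrets with innocuous names, so treat the output as a starting list, not a complete one.

```python
import re

# Heuristic sketch: spot env var names that likely hold secrets but are
# not flagged sensitive. The pattern is an assumption, not exhaustive.
SECRET_PATTERN = re.compile(r"(KEY|SECRET|TOKEN|PASSWORD|CREDENTIAL)", re.IGNORECASE)

def likely_unflagged_secrets(env_vars):
    """env_vars: dict of name -> {'sensitive': bool}. Return names to fix."""
    return sorted(
        name for name, meta in env_vars.items()
        if SECRET_PATTERN.search(name) and not meta.get("sensitive", False)
    )

# Example data (made up): one unflagged secret, one already flagged.
env = {
    "STRIPE_SECRET_KEY": {"sensitive": False},
    "NEXT_PUBLIC_SITE_NAME": {"sensitive": False},
    "GITHUB_TOKEN": {"sensitive": True},
}
print(likely_unflagged_secrets(env))
```

Anything this surfaces gets rotated first, then flagged sensitive; rotating without reflagging just resets the clock on the same exposure.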

The last step, and the most critical, is a hard inventory of the third-party AI tools touching your infrastructure. Not just the ones you contracted. The ones any employee connected via OAuth in the last year. Note-taking tools, productivity agents, scrapers, calendar integrations, Slack bots. Each one is a potential attack surface. The inventory is 80% of the governance work. The other 20% is the process to keep it current.
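The “keep it current” part can be as simple as a review date per grant and a check that runs on a schedule. A minimal sketch, assuming a quarterly cadence and an invented record shape:

```python
from datetime import date, timedelta

# Minimal inventory sketch: each third-party grant carries a last-review
# date, and anything older than the review interval is overdue. The
# record shape and the 90-day interval are assumptions.
REVIEW_INTERVAL = timedelta(days=90)

def overdue(inventory, today):
    """Return app names whose last review is older than the interval."""
    return [t["app"] for t in inventory
            if today - t["last_reviewed"] > REVIEW_INTERVAL]

inventory = [
    {"app": "slack-bot", "last_reviewed": date(2026, 4, 1)},
    {"app": "calendar-agent", "last_reviewed": date(2025, 11, 15)},
]
print(overdue(inventory, date(2026, 4, 20)))
```

The structure matters more than the tooling: whether this lives in a script, a spreadsheet, or a ticketing rule, every grant needs an owner and a date, or the inventory rots back into the state you started from.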

If those three steps take more than two hours, that’s the signal governance has been in reactive mode for too long.

What we’re doing at IQ Source

Full transparency: we deploy customer workloads on Vercel. We spent the last 24 hours on three things.

We audited every env var on every customer project, identified which ones weren’t flagged sensitive, flagged them, and rotated those containing actual secrets. We reviewed the OAuth grants on our own Google Workspace, revoked two integrations that had become orphaned, and documented which tools remain active with which scopes. We called every customer with sensitive infrastructure to explain what we did and what they need to do in their own environments.

We’re not moving a single customer off Vercel because of this. The operational reason is simple: the vector wasn’t Vercel, it was third-party OAuth in Google Workspace. That surface exists on AWS Amplify, Netlify, Cloudflare Pages, and any managed platform you contract. Moving means changing brand, not changing risk. What we did change is the internal OAuth audit cadence, which went from annual to quarterly.

For new customers, what we offer in AI Operations now explicitly includes a third-party OAuth inventory and revocation procedure before any agent hits production. It’s not a cosmetic upsell — it’s the part of the job that six months ago was optional and this quarter stopped being.

What we don’t do: sell panic. If your operation didn’t have an OAuth audit before April 19, it doesn’t need one because Vercel hit the news. It needs one because it’s been a gap for a while. The incident just gave you political cover to prioritize what should have been prioritized.

The short window

You have a short window, maybe two weeks, where OAuth and vendor auditing has automatic exec attention. After that the news cycle moves, the conversation goes back to roadmap and features, and this work has to compete for priority against everything else.

If your stack includes at least one managed platform with access to customer data and at least one third-party AI tool with active OAuth on your Google Workspace, get in touch this week. We’re not looking to sell a formal audit. What we’ll do for free is review your Google Workspace admin console with you, identify the three highest-risk apps, and tell you what to revoke before Monday. Thirty minutes. If after that you want a full audit, we can talk. If not, you avoid next week’s breach.

It’s the kind of conversation that in 2025 would have felt excessive. In 2026, after LiteLLM, Glasswing, and Vercel in a single quarter, it feels late.

