
Your AI Wants to Touch Payroll. Kubernetes Knows How.

The engineer who built Azure Kubernetes Service is now Workday's CTO. It's not a hire — it's an architecture signal: container governance is the playbook for AI agents.


Ricardo Argüello


CEO & Founder

Business Strategy · 7 min read

On April 2, 2026, Microsoft released the Agent Governance Toolkit, an open-source system for governing autonomous AI agents that deploys as a Kubernetes sidecar container. It covers all 10 risks in the OWASP Top 10 for Agentic Applications. Instead of probability scores or guardrails that work most of the time, it uses deterministic enforcement at sub-0.1ms latency, running in the same infrastructure pattern that governs containers.

Six days later, Gabe Monroy became CTO of Workday. Monroy spent 25 years in platform infrastructure. He founded a startup in the early ecosystem of Kubernetes (the container orchestration platform originally created by Google in 2014, now the industry-standard open-source project under the CNCF). Microsoft acquired that startup, and Monroy helped build Azure Kubernetes Service (AKS), Microsoft’s managed K8s offering. He then led GKE (Google’s equivalent) and developer products at Google, and ran product and engineering at DigitalOcean. His career has been a series of variations on the same problem: how to let untrusted code do useful things inside systems that need to stay controlled.

The timing matters. The governance architecture Kubernetes built for containers is becoming the blueprint for AI agents operating on enterprise payroll, expenses, benefits, and wire transfers.

Where Monroy actually spent 25 years

Most coverage of this appointment reads like corporate musical chairs. But Monroy’s background tells a more specific story.

Kubernetes had a governance problem in its early days that looked a lot like what AI agents face now. Any container could call any service across the cluster, with no verified identity and no record of what happened. A compromised container could move laterally across an entire production environment, and by the time anyone noticed, the damage was already done.

The solution came through four primitives that the industry now treats as table stakes:

  • RBAC (Role-Based Access Control) defines what each workload can access. A frontend pod has no business touching the production database
  • Admission controllers intercept every action before it executes. If a container requests privileges it shouldn’t have, the request gets rejected before anything changes
  • Network policies define trust boundaries between services. The monitoring stack doesn’t get to talk to the payment service
  • Audit logs provide an immutable record of every action, who requested it, what it touched, and when

The value of these primitives lies in their rigid, mechanical nature. They don’t require an AI model to “understand” any policy. They just enforce it, every time, before any action takes effect.
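That deterministic quality is easy to see in code. The following Python sketch (illustrative names only, not Kubernetes's actual implementation) shows the shape of the pattern: a static policy table, a gate that checks it before anything executes, and an append-only audit record of every verdict.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical policy table: role -> set of (verb, resource) pairs it may use.
POLICY = {
    "frontend": {("get", "cache"), ("get", "static-assets")},
    "billing":  {("get", "payments-db"), ("write", "payments-db")},
}

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, role, verb, resource, allowed):
        # Append-only record: who asked, what for, when, and the verdict.
        self.entries.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "role": role, "verb": verb,
            "resource": resource, "allowed": allowed,
        })

def admit(role, verb, resource, log):
    """Deterministic gate: the request either matches policy or it is denied.

    No scoring, no model judgment -- the same input always yields the
    same answer, and the decision is logged before anything changes."""
    allowed = (verb, resource) in POLICY.get(role, set())
    log.record(role, verb, resource, allowed)
    return allowed

log = AuditLog()
admit("frontend", "write", "payments-db", log)  # denied: not in policy
admit("billing", "write", "payments-db", log)   # allowed
```

The point of the sketch is what is absent: there is no code path where the policy is "interpreted." A lookup either matches or it doesn't.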

Monroy helped build this governance layer for containers. Now he’s at a company that needs the same pattern, but for AI agents operating on financial data.

The convergence happening right now

Kubernetes-based agent governance is already running in production. Multiple companies shipped real governance tooling on Kubernetes infrastructure in Q1 2026, aimed squarely at AI agents.

Microsoft’s Agent Governance Toolkit is the most explicit example. The open-source project deploys as a sidecar container alongside AI agents, intercepting every action before execution. It integrates with LangChain, CrewAI, Google ADK, and the OpenAI Agents SDK through each framework’s native extension points. Sub-millisecond latency at p99. Microsoft has stated plans to move the project to a foundation for community governance.

IBM took a different approach with Agentic Networking, extending the Kubernetes Gateway API specifically for governed agent traffic. Meanwhile, Kubescape 4.0 added AI agent scanning as a first-class security feature for Kubernetes workloads, and Tigera published a 2026 outlook documenting how Kubernetes clusters are shifting from hosting traditional cloud-native applications to running agent-based workloads with fundamentally different demands around workload identity, access control, and policy enforcement.

The OWASP Top 10 for Agentic Applications (published December 2025) provides the security framework that ties all of this together. Three of the top four risks it identifies correspond directly to existing Kubernetes governance primitives:

  • Agent Goal Hijacking maps to admission controllers: validate what the agent intends to do before it does it
  • Excessive Tool Use maps to RBAC: agents don’t abuse tools because they’re malicious, but because they have more permissions than they need
  • Excessive Trust Delegation maps to network policies: restrict which agents can communicate with which systems
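Translated into agent terms, those three mappings can be sketched in a few lines of Python (all agent, tool, and function names here are hypothetical, chosen only to make the correspondence concrete):

```python
# Excessive Tool Use -> RBAC: each agent gets an explicit tool allowlist.
AGENT_TOOLS = {
    "expense-agent": {"read_expenses", "approve_expense"},
    "payroll-agent": {"read_payroll"},
}

# Excessive Trust Delegation -> network policies: which agent may call which.
DELEGATION = {
    "expense-agent": {"payroll-agent"},  # may ask payroll-agent questions
    "payroll-agent": set(),              # may not delegate to anyone
}

def can_use_tool(agent, tool):
    return tool in AGENT_TOOLS.get(agent, set())

def can_delegate(src, dst):
    return dst in DELEGATION.get(src, set())

# Agent Goal Hijacking -> admission control: validate the agent's intended
# plan before any step executes, and reject it wholesale if any step is
# outside the agent's declared permissions.
def admit_plan(agent, plan):
    return all(can_use_tool(agent, step) for step in plan)

admit_plan("payroll-agent", ["read_payroll"])                  # True
admit_plan("payroll-agent", ["read_payroll", "trigger_wire"])  # False
```

A hijacked prompt can change what the agent wants to do; it cannot change what the allowlist permits.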

With Microsoft, IBM, OWASP, and several open-source projects adopting this architecture in the same quarter, it’s quickly becoming an industry standard rather than an experiment.

What Workday is building with that architecture

Workday is already testing this pattern at a scale that most agent startups can’t touch. Their FY2026 numbers tell the story: $9.55 billion in annual revenue, with more than 65% of the Fortune 500 running payroll and financials through the platform, 75 million users under contract, and 1.7 billion AI actions executed across the ecosystem last fiscal year.

The $1.1 billion Sana acquisition in September 2025 was the bet. Sana’s agents now automate HR and finance tasks inside the Workday ecosystem, orchestrating across Gmail, Salesforce, and Slack. Early adopters report recruiter capacity up 54% and FP&A efficiency up 49%. Workday has 14 additional agents planned for 2026, plus a new “Agent System of Record” product to track what every AI agent does across the organization.

The agents are impressive. An agent that can query every employee’s compensation, approve expense reports, modify headcount plans, and trigger wire transfers has enormous upside. As Aakash Gupta noted, while the upside is massive, “the attack surface is terrifying.” Each of those capabilities doubles as an attack vector when the agent has more permissions than it should, or when someone finds a way to manipulate its instructions. Which helps explain why Workday went looking for the person who already solved a version of this problem, for containers.

The governance framework your agents need

Whether or not you run Workday, the underlying architecture problem is the same. If you have AI agents operating in your enterprise (or you’re planning to deploy them), four things need to be in place. All four come from the Kubernetes governance playbook.

Trust boundaries. Each agent should operate within a defined scope. An agent that queries compensation data shouldn’t be able to modify wire transfers, and an agent that handles expenses shouldn’t see performance reviews. The distance between a minor incident and a financial disaster is usually one missing boundary.

Least-privilege access. Reading payroll and writing payroll are fundamentally different permissions. If your agent has write access when it only needs read, you’re repeating the mistake Kubernetes fixed with RBAC over a decade ago.

Immutable audit trails. Every agent action gets logged: who triggered it, what data it accessed, what changed, and when. If your organization already has shadow AI running without auditing, adding agents with financial access but no logging just compounds the risk you already have.

Rollback capability. When an agent approves something wrong (a misclassified expense, an incorrect headcount change, a duplicate transfer), you need to revert it immediately, the same way Kubernetes rolls back a failed deployment to its previous known-good state.
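A minimal skeleton tying the four primitives together for a single agent might look like the following. All names are hypothetical, and a real system would enforce these boundaries in infrastructure outside the agent's own process, not inside it; the sketch only shows how the pieces relate.

```python
from datetime import datetime, timezone

class GovernedAgent:
    """Sketch: trust boundary, least privilege, audit trail, rollback."""

    def __init__(self, name, allowed_actions):
        self.name = name
        self.allowed = set(allowed_actions)  # trust boundary + least privilege
        self.audit = []                      # append-only audit trail
        self.history = []                    # prior states, for rollback

    def execute(self, action, state, change):
        entry = {"ts": datetime.now(timezone.utc).isoformat(),
                 "agent": self.name, "action": action}
        if action not in self.allowed:
            # Denials are logged too: the audit trail records every attempt.
            self.audit.append({**entry, "allowed": False})
            raise PermissionError(f"{self.name} may not {action}")
        self.history.append(dict(state))     # snapshot before mutating
        state.update(change)
        self.audit.append({**entry, "allowed": True, "change": change})
        return state

    def rollback(self, state):
        """Revert to the last known-good snapshot, the way Kubernetes
        rolls a failed deployment back to its previous revision."""
        previous = self.history.pop()
        state.clear()
        state.update(previous)
        self.audit.append({"agent": self.name, "action": "rollback"})
        return state

expenses = {"E-17": "pending"}
agent = GovernedAgent("expense-agent", {"approve_expense"})
agent.execute("approve_expense", expenses, {"E-17": "approved"})
agent.rollback(expenses)  # expenses is back to {"E-17": "pending"}
```

Note that rollback only works because the snapshot was taken before the mutation; bolting on reversibility after an agent already has write access is much harder.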

If you already run AI agents in enterprise workflows, it’s worth asking how many of these four primitives are actually in place today.

Why Gartner says 40% won’t make it

Gartner predicts that over 40% of agentic AI projects will be canceled before the end of 2027, citing escalating costs, unclear business value, and inadequate risk controls. Analyst Anushree Verma was blunt: most agentic projects right now are early-stage experiments driven by hype and frequently misapplied.

Getting an AI agent to run reliably in production without breaking customer trust, violating compliance, or creating legal exposure takes exactly the kind of governance infrastructure that Kubernetes has been refining for containers over the past decade.

Monroy tamed container chaos at Microsoft and Google. Workday is betting he can do the same for AI agents. The architectural patterns are identical, but they’re now protecting financial and payroll data for 75 million people rather than routing web traffic.

At IQ Source, we help companies build the governance architecture their AI agents need: trust boundaries, granular permissions, audit trails, and rollback capability. If you’re deploying agents against sensitive enterprise data, we should talk.


AI agents · Kubernetes governance · Workday · payroll · enterprise security · OWASP
