Your Next Executive Assistant Is an AI System
Ricardo Argüello — March 14, 2026
CEO & Founder
General summary
Dave Killeen, CPO at Pendo ($2.6B), built a Claude Code system that monitors 45 enterprise deals without attending pipeline reviews. He combines Granola transcripts, Clary health scores, and custom scoring to generate prioritized alerts and draft Slack messages. This case reveals a new pattern: executives building AI systems for themselves, not their teams.
- Killeen combines meeting transcripts (Granola), CRM health scores (Clary), and custom scoring in Claude Code to monitor 45 active deals
- The system generates prioritized alerts and Slack message drafts — eliminating the need to attend pipeline reviews
- This is not team automation: it's an executive removing intermediaries between raw data and their own judgment
- Conditions for building a personal AI system: repetitive decision cadence with digital data, multi-source synthesis needs, and willingness to define 'good output' criteria
- Tobi Lutke (Shopify) and Jason Calacanis follow related but distinct patterns — engineering optimization and team operations, not personal intelligence
Imagine you're the chief product officer at a $2.6 billion company with 45 enterprise deals in play. Normally you'd sit in pipeline review meetings every week to know how each one is going. Instead, you build a program that reads your sales meeting transcripts, checks each account's health in the CRM, scores every deal, and messages you on Slack with the ones that need your attention. That's what Dave Killeen, Pendo's CPO, did. He didn't automate his team — he automated himself.
AI-generated summary
Dave Killeen is Field CPO at Pendo, a $2.6B company. He monitors 45 active enterprise deals. He doesn’t attend pipeline reviews. He built a system with Claude Code that reads meeting transcripts, checks CRM health scores, scores each deal, and sends him a Slack message with what needs his attention. Aakash Gupta documented the case in detail.
One command. 45 deals. Zero review meetings.
Killeen didn’t automate a team process — he automated himself. And that’s what makes this different from nearly every enterprise AI case we’ve covered so far.
45 Deals, One Command, Zero Pipeline Reviews
Killeen’s system has three pieces working together.
Input: Granola transcribes every sales meeting automatically. This isn’t passive recording — Granola generates structured notes including commitments, objections, and next steps. Each transcript feeds the system without manual intervention.
Intelligence: Clary provides health scores for each account — engagement metrics, sales cycle progress, churn risk. Claude Code takes these two sources (transcripts + health scores) and applies custom scoring that Killeen defined according to his own criteria: which signals indicate a deal at risk, which patterns predict close, which accounts need CPO intervention vs. ones the sales team can handle alone.
Output: The system generates two things. First, a prioritized list of deals requiring attention — not all 45, but the 5 to 7 that have something Killeen needs to see. Second, draft Slack messages — not generic ones, but contextualized with the specific information from each deal.
The result: Killeen reviews the output, adjusts what needs adjusting, and acts. The weekly pipeline review — that 90-minute meeting where each deal manager reports status — stopped being necessary for him.
He didn’t eliminate the meeting for his team. He eliminated it for himself. His team still does their work. Killeen simply already knows what he needs to know before anyone tells him.
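As a rough illustration of how such a system might prioritize deals, here is a minimal sketch. The field names, weights, and thresholds below are assumptions for the example — not Killeen's actual criteria, which he defined for his own pipeline:

```python
from dataclasses import dataclass

# Hypothetical deal record combining the two data sources:
# transcript-derived signals (e.g. parsed from meeting notes)
# and a CRM health score. All fields are illustrative.
@dataclass
class Deal:
    name: str
    health_score: int           # 0-100, from the CRM
    open_objections: int        # counted from meeting notes
    days_since_last_meeting: int

def priority(deal: Deal) -> int:
    """Toy scoring rule: lower health, more open objections,
    and staleness all push a deal up the attention list."""
    score = 100 - deal.health_score
    score += 10 * deal.open_objections
    if deal.days_since_last_meeting > 14:
        score += 25  # deal has gone quiet
    return score

def deals_needing_attention(deals: list[Deal], top_n: int = 7) -> list[Deal]:
    """Return the handful of deals the executive should look at,
    not the full list of 45."""
    return sorted(deals, key=priority, reverse=True)[:top_n]
```

The point of the sketch is the shape, not the numbers: the executive's judgment lives in the scoring function, and the output is a short ranked list rather than everything.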
The Architecture Behind the System
What Killeen built can be described in three layers that apply beyond his specific case:
| | Traditional pipeline review | Personal AI system |
|---|---|---|
| Frequency | Weekly or biweekly | Continuous (whenever new data arrives) |
| Data sources | What each person remembers to report | Transcripts + CRM + automated scoring |
| Latency | 5-14 days between signal and action | Same day |
| Bias | Filtered by who reports and how | Direct data, criteria defined by the executive |
| Time cost | 60-90 min/week + preparation | Minutes of review per day |
The input layer captures data that already exists — meetings that already happened, metrics the CRM already tracks. It doesn’t generate new information. It transforms it from scattered formats (audio, dashboards, emails) into a format the intelligence layer can process.
The intelligence layer is where the executive’s judgment lives in code. Killeen didn’t tell Claude Code “analyze these deals.” He told it “a deal at risk looks like this, a deal that needs my intervention looks like this, and a deal my team can handle has these characteristics.” Defining those criteria is the real work — and it’s work only the executive can do.
The output layer isn’t a dashboard. It’s prepared action. Messages ready to send, not charts to interpret. The difference matters: a dashboard requires the executive to spend time reading and interpreting. A draft Slack message requires them to decide whether to send or modify. The second is orders of magnitude faster.
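To make the "prepared action" idea concrete, a minimal sketch of an output layer that produces a contextualized draft instead of a chart (the parameters and wording are hypothetical, not the real system's schema):

```python
def slack_draft(deal_name: str, owner: str, signal: str, health: int) -> str:
    """Turn a flagged deal into a message the executive can send
    or edit in seconds, rather than a dashboard to interpret.
    All inputs here are illustrative placeholders."""
    return (
        f"Hi {owner} — {deal_name} dropped to a health score of {health} "
        f"after the last call ({signal}). Worth a sync before Thursday?"
    )
```

The design choice is the key part: the output embeds the deal's specific context, so the executive's only remaining decision is send, edit, or skip.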
Not Team Automation — Intermediary Elimination
When we talk about “AI for business,” we usually mean automating processes that teams execute: customer onboarding, report generation, ticket classification. Those are valuable projects. But they’re team projects.
What Killeen did is different. He removed the chain of intermediaries between raw data and his own judgment. The traditional pipeline review works like this: data lives in the CRM. Account managers interpret it. They prepare a summary. They present it in the meeting. The CPO listens, asks questions, and forms an opinion. There are at least three layers of human filtering between the original data point and the executive’s decision.
Killeen’s system compresses that to zero filtering layers. Data flows directly from the CRM and transcripts to the CPO’s system, processed according to criteria he defined himself.
He didn’t replace anyone. His account managers still do their jobs. What he eliminated was the information telephone game — that chain where each link adds interpretation, omits details, and filters based on what they think is relevant.
In our experience at IQ Source, the biggest signal loss in B2B companies doesn’t happen in the systems — it happens in the layers of human summarization between systems and decisions. An executive who receives data filtered through three people has one version of reality. An executive who defines their own filtering criteria and applies them directly to the data has another.
Killeen Isn’t the First — But He’s the Most Specific
The pattern of executives using AI directly didn’t start with Pendo. But each prior case solves a different problem.
Tobi Lutke at Shopify ran autonomous research agents on Liquid, Shopify's templating engine — production infrastructure code optimization. It's direct AI use by an executive, yes, but applied to engineering, not recurring business decisions. Lutke used an agent for a specific project with a start and end. Killeen built a permanent system that runs every day.
Jason Calacanis with Ultron automated team operations — deal flow, due diligence, portfolio management at his investment fund. It’s closer to Killeen’s case, but Ultron operates as the team’s system, not the leader’s personal system. The difference: Ultron replaces tasks that analysts used to do. Killeen’s system replaces the meeting where he received the information.
And the evolution of business intelligence dashboards created better ways to visualize data for the entire organization. But a dashboard is a shared tool. Killeen’s system is personal — calibrated to his criteria, connected to his channels, optimized for his decisions.
What makes the Pendo case special is its specificity. It’s not “I use AI to be more productive.” It’s “I built a system connecting Granola + Clary + Claude Code to monitor 45 deals against my criteria and send me Slack drafts.” That concreteness is what makes it replicable.
What Kind of Executive Can Do This Today
This doesn’t apply to every executive or every type of decision. Three conditions need to align.
First: a repetitive decision cadence with digital data. Pipeline reviews, forecast updates, budget approvals, portfolio reviews — any recurring meeting where the executive needs an update on the status of multiple items. If the decision happens once a quarter and depends on qualitative factors that aren’t in any system, a personal agent doesn’t help much.
Second: decisions requiring synthesis from multiple sources. If all the information you need is in a single dashboard, you don’t need an AI system — you need to look at the dashboard. The value appears when you need to cross-reference meeting transcripts with CRM data with emails with product metrics. That synthesis is what a human does slowly and a system does fast.
Third: willingness to define what “good output” means. This is the condition most people underestimate. Killeen had to articulate his own prioritization criteria — what makes a deal critical, what signals indicate risk, what health score threshold triggers an alert. Most executives apply those criteria intuitively. Codifying them requires an introspection exercise many haven’t done.
Where doesn’t it work yet? Decisions that depend on personal relationships — knowing that your client’s VP just had a team change and is under pressure, or that there’s internal tension on the account that doesn’t show in any data. Creative strategy decisions — positioning a new product, redefining a market narrative. Those still require human intuition that no system captures yet.
What Changes When Data Arrives Unfiltered
When an executive shifts from receiving filtered information to receiving data processed by their own criteria, the effects go beyond saving themselves a meeting.
Earlier intervention. In a traditional pipeline review, an at-risk deal gets discussed Thursday in the meeting. The CPO learns about it, asks for context, and acts Friday or the following Monday. With a personal system, the risk signal arrives the same day it appears in the data. Killeen can message the account manager on Tuesday about a change detected Monday. The difference between intervening Tuesday and the following Thursday can be the difference between retaining and losing an enterprise account.
Cross-deal pattern detection. An account manager knows their 8 accounts. The CPO, in theory, knows all 45. But in practice, information arrives fragmented — one deal at a time, one summary at a time. Killeen’s system processes all 45 deals every time it runs. It can detect that three accounts in the same segment show the same risk signals simultaneously — a pattern that individual meetings would miss.
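The cross-deal detection described above can be sketched as a simple grouping pass. The tuple shape and threshold are assumptions for the example:

```python
from collections import defaultdict

def segment_risk_clusters(deals, min_cluster=3):
    """Flag segments where several deals show the same risk signal
    at once — the kind of pattern one-deal-at-a-time reviews miss.
    `deals` is a list of (name, segment, signal) tuples; the shape
    is an assumption for this sketch."""
    clusters = defaultdict(list)
    for name, segment, signal in deals:
        clusters[(segment, signal)].append(name)
    return {key: names for key, names in clusters.items()
            if len(names) >= min_cluster}
```

A human reviewing 45 deals one summary at a time almost never runs this comparison; a system runs it on every pass for free.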
Consistent criteria application. Humans are inconsistent. The same deal presented Monday morning gets different attention than the same deal presented Thursday at 4pm. A system applies the same criteria every time. No recency bias, no meeting fatigue, no distraction by whichever deal the most charismatic account manager presents first.
In our experience with B2B clients, many executives discover, when trying to define their criteria for a system, that those criteria weren't as clear as they thought. It ends up being an exercise in clarifying their own decision criteria — and that alone justifies the effort even if the system never gets built.
Building Your First Personal AI System
The code isn’t the barrier. Claude Code is available. Granola has an API. Clary has an API. Any developer can connect the pieces in a couple of days.
The hard part is knowing which process to attack first.
The question that works: "Which recurring meeting do I always walk into thinking 'I should have known this before sitting down'?"
It might be the pipeline review. It might be the forecast meeting. It might be the project portfolio review. It might be the budget approval session. What they all share: the executive needs an update before they can make decisions, and that update arrives during the meeting itself — not before.
That meeting is your starting point. The information you need before walking in tells you what data to capture. The criteria you apply during the meeting tell you what scoring to define. The actions you take afterward tell you what outputs to generate.
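The mapping above — data to capture, scoring to define, outputs to generate — can be sketched as a capture → score → act loop. The data-fetching functions are injected as parameters because Granola's and Clary's real API clients and response shapes aren't shown here; in practice you would pass in thin wrappers around their APIs (the shapes in the comments are assumptions):

```python
def run_review(fetch_transcripts, fetch_health_scores, score, draft, send):
    """One pass over the pipeline: capture data that already exists,
    apply the executive's own scoring criteria, and emit
    ready-to-send drafts for the deals that need attention."""
    notes = fetch_transcripts()        # e.g. {account: [signals, ...]}
    health = fetch_health_scores()     # e.g. {account: 0-100 score}
    for deal in score(notes, health):  # criteria the executive defined
        send(draft(deal))              # a Slack draft, not a dashboard
```

Dependency injection keeps the skeleton honest: each slot corresponds to a question only the executive can answer — what data, what criteria, what action.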
Killeen didn’t start by saying “I want an AI system.” He started by saying “I don’t need to be in that meeting if I already have the information.”
Tell us which recurring meeting makes you think “I should have known this before walking in.” We’ll map the data sources, evaluate feasibility, and tell you whether a personal system makes sense for your case. No commitment — just a 20-minute technical conversation.
Schedule a conversation →
Frequently Asked Questions
What exactly did Dave Killeen build?
Dave Killeen, CPO at Pendo ($2.6B), built a system using Claude Code that integrates Granola meeting transcripts, Clary CRM health scores, and custom deal scoring. The system generates prioritized alerts and draft Slack messages, allowing him to monitor 45 active enterprise deals without attending weekly pipeline reviews.
How is a personal AI system different from team automation?
Team automation improves processes run by multiple people — approval workflows, report generation, onboarding. A personal AI system removes intermediaries between raw data and an executive's judgment. It doesn't replace people but eliminates the information telephone game: instead of a team preparing a briefing, data arrives directly processed to the leader.
What tools and architecture does the system use?
Killeen's system uses three layers: input (Granola for meeting transcription), intelligence (Clary for CRM health scores plus Claude Code for custom scoring and prioritization), and output (prioritized alerts plus Slack message drafts). The specific tools vary by context, but the three-layer architecture — capture, process, act — applies to any executive personal AI system.
Can every executive build a personal AI system today?
Not every executive, but more than most think. Three conditions must align: a repetitive decision cadence with digital data (pipeline reviews, forecasting, approvals), decisions requiring synthesis from multiple sources, and willingness to define what 'good output' means. Creative strategy decisions or those depending on personal relationships aren't good candidates yet.