AI Agents This Week: Products, Acquisitions, and Risks

Perplexity Computer, Anthropic acquires Vercept, the OpenClaw security crisis, and NIST agent standards. What these stories mean for your B2B company.

Ricardo Argüello

CEO & Founder

AI & Automation · 7 min read

This Week, AI Agents Stopped Being Demos

Three things happened in 48 hours that would have taken months a year ago. Perplexity launched a commercial product orchestrating 19 AI models to execute full tasks. Anthropic acquired a startup specializing in agents that operate computers. And a security report revealed that the plugin marketplace of one of the most popular open-source agents contains over 340 malicious skills.

AI agents are no longer a keynote promise. They’re products with price tags, eight-figure corporate acquisitions, and — predictably — real attack vectors. At IQ Source, we track these stories weekly because they directly affect the technology decisions of the companies we work with. Here’s our read on what happened and why it matters.

If you’re still figuring out how AI agents fit into your operations, our practical agent playbook for decision-makers gives you the full framework.

Perplexity Computer: 19 Models, One Command

On February 25, Perplexity announced Computer, a system that goes beyond search: it executes complete tasks on your computer. The user describes what they want — “organize last month’s expenses into a spreadsheet” — and the system plans the steps, selects the right models, and executes.

What’s interesting isn’t that an agent can operate a computer. Anthropic demonstrated that with Claude computer use. What makes Perplexity Computer different is multi-model orchestration. Instead of asking one model to do everything, the system assigns each part of the task to the model best suited for it.

Which Model Does What

| Model | Role |
| --- | --- |
| Claude Opus 4.6 | Complex reasoning and task planning |
| Gemini | Research and information synthesis |
| Nano Banana Pro | Image generation and editing |
| Sonar (Perplexity’s own) | Real-time web search |
| Additional models | Code, document analysis, data processing |

The price: $200/month on the Max tier, with 10,000 monthly credits. According to PYMNTS, tasks cost between 1 and 5 credits depending on complexity.
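The orchestration pattern above can be sketched as a simple router: classify each step of a task, then dispatch it to the model best suited for that step. The model names mirror the table; the step taxonomy and dispatch logic are illustrative, not Perplexity's actual implementation.

```python
# Toy multi-model router. Model names follow the table above; the step
# types and routing table are invented for illustration.

ROUTES = {
    "plan":     "claude-opus-4.6",   # complex reasoning and task planning
    "research": "gemini",            # research and information synthesis
    "image":    "nano-banana-pro",   # image generation and editing
    "search":   "sonar",             # real-time web search
    "code":     "specialist-model",  # code, documents, data processing
}

def route(step_type: str) -> str:
    """Return the model assigned to a step, falling back to the planner."""
    return ROUTES.get(step_type, ROUTES["plan"])

def run_task(steps: list[tuple[str, str]]) -> list[str]:
    """Dispatch each (step_type, description) pair to its model."""
    return [f"{route(kind)} <- {desc}" for kind, desc in steps]

plan = [
    ("search", "find last month's expense emails"),
    ("plan", "group expenses by category"),
    ("code", "write them into a spreadsheet"),
]
for line in run_task(plan):
    print(line)
```

The point of the pattern is the routing table itself: adding a new capability means adding a row, not retraining or re-prompting a single do-everything model.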

What This Means for B2B Companies

The multi-model orchestration pattern is exactly what we described in our enterprise agents playbook: don’t use one model for everything — use the right one for each step. Perplexity is packaging that pattern as a consumer product.

For enterprises, the difference is that you can’t route client data, contracts, or financial information through a $200/month SaaS product without security controls, audit trails, or regulatory compliance. The orchestration pattern is sound. The implementation requires your own models, your own infrastructure, and your own rules.

Anthropic Acquires Vercept: The Bet on Computer-Operating Agents

The same day Perplexity launched Computer, TechCrunch reported that Anthropic acquired Vercept, a San Francisco startup that built “Vy” — an agent capable of operating Mac computers remotely, including opening applications, using interfaces, and executing complete workflows without human intervention.

Vercept had raised $50 million from investors including Spark Capital and Y Combinator, with notable angels from the AI ecosystem. The entire team joined Anthropic, and Vercept will shut down operations on March 25.

This isn’t the first time Anthropic has bought talent instead of building in-house. In December they acquired the Bun team, the JavaScript runtime. The pattern is clear: Anthropic is assembling the full stack for AI agents — from the base model to the execution layer that interacts with the real world.

For companies planning their integration strategy, this acquisition reinforces what we’ve been saying: the market is moving toward agents that operate software directly. But the cleanest, most controllable path for most enterprises is still connecting agents to their systems through APIs and standardized protocols like MCP, not having an agent drive the GUI. If you want to explore that route, we have a team dedicated to MCP server development that connects agents to enterprise systems securely.
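The API-first alternative can be illustrated with a toy tool registry: enterprise operations are exposed to the agent as typed, auditable functions rather than pixels to click. This is not the real MCP SDK, and the tool names and data are invented; a production setup would use an actual MCP server in front of real backends.

```python
# Toy tool registry: expose enterprise operations to an agent as typed,
# auditable functions instead of letting it drive a GUI. Tool names and
# data are invented; not the real MCP SDK.

from typing import Callable

TOOLS: dict[str, Callable[..., object]] = {}

def tool(fn: Callable[..., object]) -> Callable[..., object]:
    """Register a function as an agent-callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_invoice(invoice_id: str) -> dict:
    # In production this would query the ERP over an authenticated API.
    return {"id": invoice_id, "status": "paid", "amount": 1200}

def call_tool(name: str, **kwargs) -> object:
    """Invoke a registered tool, leaving an audit trace for every call."""
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    print(f"audit: {name}({kwargs})")
    return TOOLS[name](**kwargs)

print(call_tool("get_invoice", invoice_id="INV-042"))
```

Because every agent action passes through `call_tool`, you get a natural choke point for authentication, authorization, and logging, none of which exist when an agent is moving a mouse across a GUI.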

OpenClaw: What Happens When Security Can’t Keep Up

While Perplexity and Anthropic made their announcements, a less glamorous but more urgent report was circulating in the security community. OpenClaw — the open-source agent that went viral and amassed millions of installations, whose founder recently joined OpenAI — has a serious security problem in its plugin ecosystem.

According to a CNBC analysis based on Cisco research, the OpenClaw skills marketplace is compromised in multiple ways:

  1. 341+ malicious skills identified — roughly 20% of the total marketplace. They include code that exfiltrates user data, installs backdoors, or redirects queries to external servers.
  2. CVE-2026-25253 (CVSS 8.8) — a remote code execution vulnerability that allows an attacker to run arbitrary commands on the user’s machine through a seemingly legitimate skill.
  3. 30,000+ exposed instances — according to The Hacker News, tens of thousands of OpenClaw installations are accessible from the internet without authentication, exposing data and enabling remote execution.
  4. Exfiltration via messaging — some skills send user data to Slack channels, Discord, or external webhooks without the user’s knowledge.

Why This Matters for Your Company

OpenClaw is a real-time case study of what happens when an agent ecosystem grows faster than its security controls. And it’s not an isolated case — it’s a preview of what can happen with any agent platform that allows third-party extensions without rigorous review.

For B2B companies, the lesson is direct: when evaluating AI agents, the security review can’t stop at the base agent. You need to audit every plugin, skill, or extension connected to the system. This applies to open-source tools and commercial products with integration marketplaces alike.
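A minimal sketch of what an automated pre-screen in that review might look like: flag any skill whose manifest declares network egress outside an approved allowlist or requests shell execution. The manifest schema here is invented for illustration; real marketplaces, OpenClaw's included, each define their own.

```python
# Minimal pre-screen for third-party agent skills: flag manifests that
# declare network egress outside an allowlist or request shell access.
# The manifest schema is invented for illustration.

ALLOWED_HOSTS = {"api.internal.example.com"}

def audit_skill(manifest: dict) -> list[str]:
    """Return a list of findings; an empty list means this check passes."""
    findings = []
    for host in manifest.get("network_egress", []):
        if host not in ALLOWED_HOSTS:
            findings.append(f"egress to unapproved host: {host}")
    if manifest.get("exec_shell", False):
        findings.append("requests shell execution")
    return findings

suspicious = {
    "name": "helpful-skill",
    "network_egress": ["hooks.slack.com"],  # the exfiltration vector above
    "exec_shell": True,
}
for finding in audit_skill(suspicious):
    print(finding)
```

A static check like this catches only declared behavior; skills that hide their egress in obfuscated code still require sandboxed execution and runtime monitoring.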

If your team is already using or evaluating AI code generation tools, our guide on AI-generated code security covers the controls you should have in place.

NIST Puts the Rules on the Table

While the industry builds and acquires at full speed, the U.S. government started bringing order. On February 17, NIST launched an AI agent standards initiative through its Center for AI and Security Intelligence (CAISI).

The initiative has three pillars: industry-led interoperability standards, open-source protocols for agent communication, and security and identity research for autonomous systems. According to SiliconANGLE, NIST published a Request for Information (RFI) on agent security with a March 9 deadline.

This isn’t academic theory. For any company selling to the federal government, defense contracts, or regulated industries like healthcare and finance in the U.S., NIST standards become compliance requirements. If you’re building or deploying agents today, it’s worth designing with these frameworks in mind — building ahead is cheaper than retrofitting later.

What All of This Means for Your Company

Agents are products, not prototypes. Perplexity is selling access to a multi-model agent for $200/month. Anthropic paid tens of millions to acquire an agent team. The market has decided that AI agents are the next layer of enterprise software — the question for your company isn’t whether to adopt them, but how and with what controls.

Agent security is your problem. OpenClaw proved that a plugin marketplace can become a massive attack vector within months. If your team uses or plans to use agents with third-party extensions — and nearly every AI agent has them — you need a security review process that covers not just the base agent but every integration. The story of WebMCP and open standards for web agents, which we covered in our WebMCP standard analysis, shows why open protocols are preferable to closed plugin ecosystems.

Standards are coming — build forward, not backward. The NIST initiative signals that the regulatory framework for AI agents is on its way. Companies that design their implementations with auditing, traceability, and access controls from day one will spend less on adaptations when standards are finalized. Those that don’t will face costly rework.
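What "auditing and traceability from day one" can mean in practice: wrap every agent action so it emits a structured, append-only audit record. The field names here are illustrative; the actual requirements will come from frameworks like the one NIST is drafting.

```python
# Toy traceability layer: every agent action produces a structured,
# append-only audit record. Field names are illustrative; real
# requirements will come from frameworks like NIST's.

import json
import time
from typing import Callable

AUDIT_LOG: list[str] = []

def audited(action: Callable[..., object]) -> Callable[..., object]:
    """Decorator that records each invocation of an agent action."""
    def wrapper(*args, **kwargs):
        record = {
            "ts": time.time(),
            "action": action.__name__,
            "args": [repr(a) for a in args],
        }
        result = action(*args, **kwargs)
        record["ok"] = True
        AUDIT_LOG.append(json.dumps(record))
        return result
    return wrapper

@audited
def send_report(recipient: str) -> str:
    return f"report sent to {recipient}"

send_report("cfo@example.com")
print(AUDIT_LOG[-1])
```

Retrofitting this onto an agent system later means replaying history you never recorded; building it in from the start is a decorator, not a migration.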


This kind of weekly analysis is what we do at IQ Source to keep up with a fast-moving ecosystem. If your leadership team needs this read tailored to your industry and specific technology decisions, let’s talk.

