'Vibe Coding Cleanup Specialist' is now a LinkedIn title

Lovable leaked source code and Supabase service_role keys from pre-Nov 2025 projects via BOLA. 'Vibe Coding Cleanup Specialist' is a real LinkedIn job title now.

Ricardo Argüello

CEO & Founder

Business Strategy · 10 min read

I opened LinkedIn late Tuesday and found something I assumed was a joke until I checked the profiles: a job title that turned out to be literal.

“Vibe Coding Cleanup Specialist” as a professional headline. Not on a parody account, not inside an ironic post. On real profiles, with photos, followers, and work history.

Jonathan Lin, based in Kuala Lumpur, lists himself as “Overengineering & Vibe Coding Cleanup Specialist.” Hamza Gulraiz in Lahore combines “Vibe coding cleanup specialist” with React Native Developer. Nishant Gohel in Bangalore has 22,000 followers and uses the same title. Wilmer Mauricio Herrera Rentería in Quito goes with “Vibe Coding Fixer | Software Architect | MSc in AI.” Victor Istrati from Chișinău, Moldova introduces himself as “Senior Web Engineer | Vibe Coding Cleanup Specialist.”

It isn’t regional. It isn’t a cohort of interns trying to be clever. These are experienced engineers on four continents who decided the segment is large enough to put at the top of a profile.

Pascal Bornet put it on the front page of LinkedIn less than 24 hours ago. The line people kept quoting from his post wasn’t the ironic one. It was the operational one: vibe coding didn’t kill the developer, it moved the developer into the cleanup layer. Something had to happen this week for five unconnected engineers in five countries to adopt the title simultaneously. What happened has a name, a vector, and a report number.

Lovable leaked everything. That’s why the title appeared.

On Monday, April 20, Alex Turnbull — founder of Groove — published a triage guide on LinkedIn about Lovable, the prompt-to-app platform. The headline he used was plain: Lovable is experiencing a mass data leak. If your team built anything on Lovable before November 2025, anyone with a free account can read your source code, your database credentials, and your AI chat history.

The mechanism is BOLA (Broken Object Level Authorization). Dvir Arad, Tech Lead at Lumana AI, summarized it in the comments: it’s the most common vulnerability in any multi-tenant system and it isn’t an AI-specific problem. Lovable’s API validates the auth token — it knows who you are — but it doesn’t validate whether you own the project you’re asking for. Valid token + someone else’s project ID = 200 OK with everything in it.
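The pattern is easy to show in miniature. The sketch below is hypothetical code, not Lovable’s actual implementation: the broken handler checks *who* you are (the token) but never *whether* the project is yours, which is exactly the BOLA described above.

```python
# Toy project store; owners and contents are illustrative.
PROJECTS = {
    "p1": {"owner": "alice", "code": "..."},
    "p2": {"owner": "bob", "code": "..."},
}

def get_project_broken(auth_user, project_id):
    """Validates the token, skips ownership: the BOLA pattern."""
    if auth_user is None:                 # we know who you are...
        return 401, None
    project = PROJECTS.get(project_id)
    if project is None:
        return 404, None
    return 200, project                   # ...but not whether this is yours

def get_project_fixed(auth_user, project_id):
    """Object-level authorization: deny any non-owner."""
    if auth_user is None:
        return 401, None
    project = PROJECTS.get(project_id)
    if project is None or project["owner"] != auth_user:
        return 403, None
    return 200, project
```

With the broken handler, any valid token plus someone else’s project ID returns 200; the fixed handler returns 403 for the same request.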

The affected endpoints are specific: /projects/{id}/*, /git/files, /git/file, and /documents. A researcher under the handle weezerOSINT created a free account and pulled the full source tree of an admin panel built for Connected Women in AI, a real Danish nonprofit. 3,703 edits this year, last touched ten days ago. Inside the source: SUPABASE_URL, SUPABASE_PUBLISHABLE_KEY, and SUPABASE_SERVICE_ROLE_KEY, all hardcoded. Real speakers from Accenture Denmark and Copenhagen Business School exposed by name, company, and LinkedIn profile.
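Hardcoded keys like these are trivially greppable. A minimal scanning sketch, assuming the variable names from the article (the regex is illustrative, not the tool any researcher used):

```python
import re

# Flag lines that assign a string literal to one of the Supabase
# variables named in the article. Reading a key from the environment
# does not match, only hardcoded literals do.
SECRET_PATTERN = re.compile(
    r'(SUPABASE_URL|SUPABASE_PUBLISHABLE_KEY|SUPABASE_SERVICE_ROLE_KEY)'
    r'\s*=\s*["\'][^"\']+["\']'
)

def find_hardcoded_secrets(source: str) -> list[str]:
    """Return the names of variables hardcoded with literal values."""
    return [m.group(1) for m in SECRET_PATTERN.finditer(source)]
```

Running something like this over a repo before every commit is the cheapest guardrail against the failure mode in this story.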

The report landed in Lovable’s HackerOne program on March 3 as #3583821. Lovable shipped a fix, but only for new projects. A project created in April 2026 returns 403 Forbidden. A project from October 2025 still returns 200 OK. Lovable calls that “intentional behavior” and recommends marking the project private — a paid-tier feature. 48 days after the report, the ticket is still open.

This is Lovable’s second major security incident in twelve months. Per Turnbull, the exposed accounts include employees at Nvidia, Microsoft, Uber, and Spotify.

The single phrase in this story that should matter most to you is this one: SUPABASE_SERVICE_ROLE_KEY.

Supabase’s security model has two tiers of keys. Row Level Security (RLS) enforces rules like “this user can only see their own rows,” and the anon and publishable keys are subject to those policies. The service_role key is not: it bypasses every RLS policy in the project. It is the master key.

That key exists so your backend can do admin operations the end user shouldn’t be able to do. It’s meant to live on the server, never in the client, never in a repo, never in a chat. In vibe coding the flow is different. You ask Lovable to add authentication or build an admin dashboard, Lovable generates code with the service_role key hardcoded directly in the project repo, and that project ends up exposed through BOLA.
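The hygiene rule has a one-function shape. A minimal sketch of the server-side pattern: the key is read from the environment at startup and the process refuses to run without it, rather than falling back to anything hardcoded.

```python
import os

def load_service_role_key() -> str:
    """Read the service_role key from the server environment only.

    Failing loudly when the variable is missing beats the vibe-coded
    alternative: a literal key committed to the repo as a 'default'.
    """
    key = os.environ.get("SUPABASE_SERVICE_ROLE_KEY")
    if not key:
        raise RuntimeError(
            "SUPABASE_SERVICE_ROLE_KEY must be set in the server "
            "environment; it must never appear in client code or the repo"
        )
    return key
```

Frontend code never calls this; anything the browser needs goes through the RLS-constrained keys instead.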

Mikhail Filatchev put it in the comments of Turnbull’s thread with uncomfortable clarity: the hardcoded service_role key issue is its own problem. It isn’t a Lovable bug, it’s a credential hygiene failure that vibe coding actively encourages. Paste your keys into an AI chat to debug, and that context gets stored somewhere you don’t control.

The structural failure isn’t just that the key leaked. It’s that the key was in frontend code to begin with, because that’s how the agent decided to build it and nobody reviewed the decision. If your company built anything on Lovable before November 2025, the service_role key for that project should be treated as public since at least 48 days ago. That’s the optimistic scenario.

Fourth breach of the quarter, same pattern

If you only see the Lovable case, it’s weekend drama. If you look at the quarter, it’s the fourth point on the same line.

In March, TeamPCP poisoned LiteLLM on PyPI by first compromising Trivy, Aqua Security’s scanner. 97 million monthly downloads. The tool that was supposed to protect the pipeline became the entry vector.

On April 1, Mercor exposed 4 TB of biometric data from 600,000 people who had interviewed with Anthropic, OpenAI, xAI, and Meta. Data that, by definition, cannot be rotated.

On April 19, Vercel was compromised via an OAuth grant to Context.ai, a third-party AI agent with access to an employee’s Google Workspace. The attack didn’t touch the model. It walked through the environment.

Now, April 20, Lovable via BOLA. Four different vectors in a single quarter: package supply chain, biometric dataset with no rotation path, third-party AI OAuth, broken authorization on a no-code platform. The common denominator isn’t AI. It’s the speed at which these platforms jumped from prototype to production without the discipline any 2015-era SaaS had to learn the hard way.

The same Dvir Arad I mentioned above puts it like this: when he worked on production microservices, the default assumption was that every object ID in a URL is attacker controlled. That mindset is what’s missing in a lot of the new stack.

Why the title appeared this week

Pascal Bornet named it first on LinkedIn: vibe coding didn’t kill the developer, it turned the developer into the cleanup layer for someone else’s decisions.

Guy Pistone, CEO of Valere, added a line in the same thread that’s worth repeating: vibe coding moved the complexity downstream. Closer to production, further from whoever wrote the prompt. That isn’t elimination of work. That’s outsourced accountability.

AI code generation does for coding what the calculator did for arithmetic — it drives down the cost of the trivial operation. What it doesn’t do is substitute judgment on what’s safe to ship. Auditing a BOLA, rotating credentials, cleaning up a leaked service_role key, rebuilding an authorization model from zero — all of that is still senior engineering work. It just now arrives wrapped in a repo someone else prompted and declared done.

The Vibe Coding Cleanup Specialist exists because the demand appeared. The demand doesn’t come from curiosity. It comes from companies that scaled from 3 to 30 projects on vibe coding with no governance and discovered, one Monday at 3 AM, that those projects had the service_role key in the frontend. The LinkedIn title is the public signal that the market already exists.

The economics changed and nobody has billed you yet

The original promise of vibe coding was “build anything in an afternoon.” That was true as long as the alternative was building nothing. Today the alternative is different: build in an afternoon and pay three times that later to clean it up.

Run the numbers with a concrete example. An admin dashboard with authentication, built in Lovable over a weekend, costs nothing in engineering time. The same dashboard, with the service_role key leaked and a post-hoc audit to determine whether someone exfiltrated data over the last 48 days, costs between five and fifteen thousand dollars per project. Rotate credentials, inspect Supabase logs, re-notify affected users, update the privacy policy, rewrite the admin backend with real authorization. All of that is billed at specialist rates.

The moat that matters in 2026 is no longer how fast you can build. It’s whether you can build without someone having to clean up after you in six months. Two companies shipping an MVP on Lovable in the same month have the same time-to-market. The one that writes the dashboard with real architecture from the first commit doesn’t pay the cleanup tax in month eight.

The common miscalculation is treating speed like it’s free. It isn’t. Prompt-built speed comes with a deferred liability, invisible on the balance sheet until an API returns 200 OK to a stranger.

If you use Lovable or equivalent, three things this week

If your company built any project on Lovable before November 2025, this week’s work is mechanical and it doesn’t wait.

Rotate the SUPABASE_SERVICE_ROLE_KEY first. That’s the master key and the absolute priority. Then rotate every other credential that ever lived in that project: connection strings, anon keys, publishable keys, and any third-party API key (Stripe, SendGrid, OpenAI, auth providers) that was ever pasted into a Lovable chat. Any credential that touched an AI chat should be treated as exposed — there’s no way to know where that conversation ended up stored.

Audit Supabase logs for the last 48 days for anomalous reads on user, billing, or admin tables. If you see queries from IPs or session tokens you don’t recognize, you’re looking at exfiltration, and regulatory notification shifts from optional to mandatory depending on jurisdiction. Costa Rica’s Ley 8968 requires a PRODHAB notification within five days. The EU’s GDPR demands 72 hours.
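The triage logic is simple enough to sketch. This is a hypothetical filter over already-parsed log entries; the field names (`table`, `ip`, `timestamp`) are assumptions for illustration, not the actual Supabase log schema.

```python
from datetime import datetime

# Tables the audit step above calls out as sensitive.
SENSITIVE_TABLES = {"users", "billing", "admin"}

def flag_anomalous_reads(entries, known_ips, window_start):
    """Return reads on sensitive tables, inside the exposure window,
    from IPs not on the allowlist of known clients."""
    return [
        e for e in entries
        if e["table"] in SENSITIVE_TABLES
        and e["ip"] not in known_ips
        and e["timestamp"] >= window_start
    ]
```

Anything this returns is a candidate exfiltration event and goes straight into the notification analysis.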

Confirm whether your project is actually affected. Open the Lovable project URL in an incognito window or under a different free account. If you can see the code or the chat history, any stranger can too. Projects created after November 2025 return 403 Forbidden. Older projects return 200 OK with the full repo.
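Interpreting what the incognito check returns reduces to the status code. A small sketch of that decision; issuing the actual unauthenticated request (with `urllib` or `curl`) is left out so only the classification logic is shown:

```python
def classify_exposure(status_code: int) -> str:
    """Map the HTTP status from an unauthenticated project fetch to a
    verdict, per the behavior described in the article."""
    if status_code == 200:
        return "EXPOSED: any free account can read this project"
    if status_code == 403:
        return "protected: object-level authorization enforced"
    return f"inconclusive: HTTP {status_code}, verify manually"
```

A 200 on a pre-November-2025 project is the signal that the rotation and audit steps above are not optional.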

If the audit confirms exposure, the next question is whether keeping that project on Lovable makes sense. The decision depends on the volume of data compromised and your company’s appetite for running on a vendor with a HackerOne ticket open for 48 days. For continuous verification after rotation, you can use our API Security Scanner, which flags authorization problems and credential exposure in public endpoints.

The cleanup market already exists

The Vibe Coding Cleanup Specialist market is real, it has public profiles, and it’s going to grow as long as no-code platforms keep shipping projects to production with the service_role key exposed in the frontend. The engineers positioning themselves there are making a correct calculation about where demand is.

IQ Source operates on the other side of the problem. Our position isn’t to clean up after the prompt. It’s to build the project once, with real authorization from the first commit, so that six months later there’s nothing to clean up. It’s the less glamorous side of the work and the only one that doesn’t generate a 48-day HackerOne ticket.

On Tuesday night, a stranger with a free Lovable account downloaded the source code of a Danish nonprofit without writing a line of exploit. By Wednesday the five engineers I saw on LinkedIn had more work than they could take on. Both are consequences of the same calculation most people still haven’t made.


Related Articles

The runtime is a commodity now. The moat is the workflow.
Business Strategy · 8 min read
Anthropic prices agent runtime at $0.08/hour and wipes out a cohort of infra startups. McKinsey: 80% of firms still see no AI impact on earnings.

OpenAI doubled prices while Nvidia cut inference 35x
Business Strategy · 14 min read
GPT-5 launched at $1.25 per million input tokens. GPT-5.5 costs $5.00 today. 4x cumulative in 8 months while Blackwell Ultra cut inference 35x.