When Vercel announced its Claude Code plugin, the developer community buzzed. The promise was simple: an AI coding companion, integrated directly into Claude, capable of understanding and manipulating your codebase. We envisioned enhanced productivity, faster debugging, and a more intuitive development workflow. What we got instead was a quiet, persistent siphon of user data, masquerading as a feature.
This isn’t just another telemetry blunder; it’s an architectural shift in how AI tools interact with our development environments, and frankly, it’s alarming. The instant you install the Vercel Claude Code plugin, a permanent device UUID lands on your machine. No prompt. No expiration. No rotation. And by default, session starts, tool calls, skill matches — all of it is whisked away to telemetry.vercel.com.
And here’s the kicker: they even built a consent dialog. For prompt text collection. But clicking ‘No thanks’ only stops that specific data stream. All other telemetry keeps humming along, leaving most users under the profoundly mistaken impression that they’ve opted out of everything. It’s a masterclass in dark patterns, disguised as user-friendly design.
The documentation? Buried eight directories deep within ~/.claude/plugins/cache/. In other words, entirely inaccessible to the average user. Documented is not informed.
The Investigator’s Trail
I found this by design, not by accident. I was building a static analysis tool for AI plugins, aiming to scan popular Claude Code skills for security vulnerabilities. My methodology combined regex pattern matching with cross-verification from a dual-LLM setup: a fairly strong approach to sniffing out trouble.
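The scanner itself isn't public, but the regex half of that methodology can be sketched in a few lines of shell. The patterns below are illustrative examples, not my actual ruleset, and the demo "skill" files exist only so the scan has something to flag:

```shell
#!/bin/sh
# Sketch of a regex-based skill scanner. Patterns are illustrative only.
demo=$(mktemp -d)
printf 'cleanup step: rm -rf "$TARGET"\n' > "$demo/cleanup-skill.md"
printf 'summarize the user prompt politely\n' > "$demo/benign-skill.md"

# Flag files containing destructive commands, exfiltration hints,
# or classic prompt-injection phrasing.
matches=$(grep -rlE 'rm -rf|curl[^|]*--data|ignore previous instructions' "$demo")
echo "$matches"
```

The real tool layered LLM cross-verification on top of this kind of pass, precisely because naive regexes throw false positives, which is what I assumed had happened here.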
During a large batch scan — two hundred Claude Code skills, specifically hunting for destructive commands, data exfiltration vectors, and the usual prompt injection suspects — my scanner threw a flag. Not on some obscure GitHub repo, but on my own machine, deep within ~/.claude/.
My first thought? A false positive in my own code. Vercel, with its reputation, seemed an unlikely culprit. So, naturally, I pulled the Vercel plugin source, intending to compare it against what I considered ‘known good’ code and pinpoint my analytical error. That’s when the truth began to unravel.
Code Whispers and Data Streams
Examining the Vercel source, I saw file paths and line numbers referencing vercel-plugin v0.32.7, nestled in ~/.claude/plugins/cache/vercel/vercel-plugin/0.32.7/.
// session-start-profiler.mts:702-709
session:device_id // permanent device identifier
session:platform // darwin, linux, win32
session:likely_skills // which skills you use
session:greenfield // whether the project is new
session:vercel_cli_installed // whether you have the Vercel CLI
session:vercel_cli_version // which version
// pretooluse-skill-inject.mts:969-971
tool_call:tool_name // which tool you just called
// pretooluse-skill-inject.mts:1205-1210
skill:injected // which skill got injected
skill:match_type // how it matched
skill:tool_name // against which tool
// user-prompt-submit-skill-inject.mts:1063-1065
prompt:skill // which skill matched your prompt
prompt:score // confidence score
All of this information, the heartbeat of your coding session, funnels into a single, unblinking endpoint: https://telemetry.vercel.com/api/vercel-plugin/v1/events.
And none of it, I must reiterate, asked for your permission.
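If you want the data flow stopped today, without waiting for Vercel, one blunt option is to null-route the hostname. This sketch appends an entry to a scratch copy of the hosts file so you can see the effect safely; point HOSTS at /etc/hosts (with sudo) to apply it for real:

```shell
#!/bin/sh
# Null-route telemetry.vercel.com via the hosts file.
# Defaults to a scratch copy; set HOSTS=/etc/hosts (run as root) to apply.
HOSTS="${HOSTS:-$(mktemp)}"
cp /etc/hosts "$HOSTS" 2>/dev/null || touch "$HOSTS"

if ! grep -q 'telemetry\.vercel\.com' "$HOSTS"; then
  printf '0.0.0.0 telemetry.vercel.com\n' >> "$HOSTS"
fi
grep 'telemetry\.vercel\.com' "$HOSTS"
```

This is a machine-wide hammer, not a scalpel: it blocks every process on the box from resolving that hostname, which in this case is exactly the point.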
The Unseen Fingerprint
This is the part that should prompt you to check your own system immediately. Execute this command:
cat ~/.claude/vercel-plugin-device-id
You’ll likely see a string that looks something like this:
473d7060-5a37-4ebb-9082-b09a983c****
A UUID. Generated the moment you installed the plugin. Silently. It has no expiration date. It will never rotate. This ID creates an indelible link between every session, every project, and every client engagement you’ve ever undertaken using Claude Code.
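If you'd rather script the check than eyeball it, here's a small sketch. The `check_device_id` helper and its messages are mine, not part of the plugin; the only thing taken from the plugin is the file path:

```shell
#!/bin/sh
# Check whether the Vercel plugin's permanent device ID exists
# and has the expected UUID shape. Helper name is illustrative.
check_device_id() {
  f="$1"
  if [ ! -f "$f" ]; then
    echo "no device ID found"
  elif grep -Eq '^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$' "$f"; then
    echo "device ID present: every session is linkable"
  else
    echo "file exists but does not look like a UUID"
  fi
}

check_device_id "$HOME/.claude/vercel-plugin-device-id"
```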
Consider this for a moment: Chrome DevTools, a product built with privacy as a core tenet, rotates its session IDs every 24 hours. Vercel’s device ID, however, is designed to last forever. Many privacy-forward analytics platforms phased out persistent device IDs years ago, recognizing their inherent risks. Yet, here we are, with a tool that ties dozens of telemetry events per coding session to a permanent, immutable fingerprint, all enabled by default.
Is This Actually Legal? (Spoiler: It’s Complicated, But Bad)
Technically, Vercel does have a README.md file. Within it, a ## Telemetry section purports to explain what’s collected and how to disable it. But does anyone truly believe that a hidden README buried deep within a plugin’s cache folder constitutes genuine consent under regulations like GDPR? Free, specific, informed, and unambiguous consent? It’s a stretch, to put it mildly.
Let’s walk through the user experience:
- You install the plugin.
- A generic success message appears.
- You start coding.
At no point is there an explicit query about telemetry. No pop-up. No checkbox. Nothing to indicate that your usage patterns are about to start flowing to Vercel’s servers, tied to a permanent identifier.
And that README? It’s languishing in ~/.claude/plugins/cache/vercel/vercel-plugin/0.32.7/. Eight directories deep. Who browses there? Nobody.
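Don't take my word for the burial depth. This sketch (the `list_buried_readmes` helper is my own naming) walks the plugin cache and prints every README it finds, so you can see exactly where the "disclosure" lives:

```shell
#!/bin/sh
# Surface every README buried in the Claude plugin cache.
# Helper name is illustrative; the path comes from the plugin's install layout.
list_buried_readmes() {
  find "$1" -name 'README.md' 2>/dev/null
}

list_buried_readmes "$HOME/.claude/plugins/cache"
```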
Most serious companies, even nascent startups with a fraction of Vercel's resources, adhere to basic privacy principles. Persistent device tracking without an install-time consent prompt simply isn't standard practice anymore.
This isn’t some fuzzy ethical gray area. This isn’t ‘technically compliant.’ A permanent device UUID, created surreptitiously, linked to every session, with zero disclosure at the point of installation – this is unequivocally a Vercel misstep.
I used this plugin daily for months. I had no clue. And I’m the very developer who was actively building a tool to scrutinize plugin source code. That speaks volumes about the obscurity of this practice.
What’s particularly galling is that the plugin does possess a consent mechanism, albeit a misleading one. There’s a dialog for prompt text collection:
// user-prompt-submit-telemetry.mts:58-61
prompt:text // full prompt content, up to 100KB — OPT-IN ONLY
This explicit opt-in for prompt data, while commendable in isolation, serves only to exacerbate the deception regarding the other, more pervasive telemetry that continues unchecked. Users, seeing any consent dialog, might wrongly assume they’ve managed to opt out of all data collection.
This isn’t just about Vercel. It’s a broader commentary on the increasing opacity of AI tools and the urgent need for transparency in how our development workflows are being monitored and analyzed. When the tools we rely on to build the future are simultaneously building a profile of our present, we have a problem. A big one.
It’s time for Vercel to address this head-on. Delete the Vercel Claude Code plugin from your machine. And demand better.
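If you decide to act on that, here's a minimal removal sketch using only the paths identified earlier in this post. It covers the plugin cache and the device ID file; your plugin manager may track the install elsewhere, so verify the paths on your own machine and treat this as a starting point, not a complete uninstall:

```shell
#!/bin/sh
# Remove the Vercel plugin cache and its permanent device ID.
# Helper name is illustrative; paths are those cited earlier in the post.
purge_vercel_plugin() {
  base="$1"  # normally $HOME/.claude
  rm -f "$base/vercel-plugin-device-id"
  rm -rf "$base/plugins/cache/vercel"
}

purge_vercel_plugin "$HOME/.claude"
```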