AI Dev Tools

Pst CLI: Secure Secrets for Agents

Developers face a recurring nightmare: API access that demands secrets, and agents that cheekily ask you to paste them into chat. A new CLI tool called `pst` aims to fix this.

[Screenshot: a terminal window showing the pst CLI installation command and output.]

Key Takeaways

  • `pst` is a new CLI tool designed to securely inject secrets into agent workflows.
  • It prevents sensitive data from being pasted directly into chat windows by using clipboard capture and environment variable injection.
  • Currently, `pst` is macOS-only, but the core concept is platform-agnostic and could see wider adoption.

Secret handling is still a mess.

For anyone trying to wrangle an agent with an API that’s stubbornly MCP-averse, the experience can be brutal. You’re left staring at a broken shell-scripting UX, or worse, the agent helpfully suggests you just, you know, paste your sensitive API key directly into the chat window. Because nothing screams security like broadcasting your credentials in plain text.

This is precisely the frustration that led to the creation of pst, a minimalist command-line tool designed to be driven by your agent. It’s not about reinventing the wheel; it’s about applying a sensible patch to a gaping hole in developer workflow. The core function is simple: pst grabs whatever is currently on your clipboard, discreetly tucks it away into the macOS Keychain (the system’s secure credential store), and then hands that value off to commands via environment-variable injection. It’s a magician’s trick for secrets, making them appear where needed without ever touching the chat log.
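The flow can be sketched with standard tools. In the sketch below, the macOS-specific halves appear as comments (pbpaste and security are real macOS commands, but how pst actually wires them together is an assumption); the injection half is plain POSIX shell and runs anywhere:

```shell
#!/bin/sh
# Sketch of the clipboard -> Keychain -> env-var flow described above.
# The macOS-specific steps (assumptions about pst's internals):
#
#   security add-generic-password -U -a "$USER" \
#       -s my-api-key -w "$(pbpaste)"          # clipboard -> Keychain
#   security find-generic-password -a "$USER" \
#       -s my-api-key -w                       # Keychain -> stdout
#
# The injection step is portable: set the variable only in the child
# command's environment, without ever exporting it in the parent shell.
secret='sk-example-123'   # stand-in for the Keychain lookup above

API_KEY="$secret" sh -c 'printf "child sees: %s\n" "$API_KEY"'

# The parent shell (and therefore the chat transcript) never had it:
printf 'parent sees: %s\n' "${API_KEY:-<unset>}"
```

Because the assignment is scoped to that single command, the secret exists in the child process's environment and nowhere else: not in the parent shell, not in history, not in the chat log.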

As the author puts it: “I got sick of caving and ‘rotating later’ (which… may or may not have happened). So I built pst, a tiny CLI your agent can drive. It grabs your clipboard, tucks it into the Keychain, and hands the value to commands through env-var injection so it never lands in chat.”

This approach works across various agent environments, including Claude Code and anything else that exposes a shell. The aim is to provide a clean, secure, and non-invasive way for agents to interact with sensitive data. It’s a subtle but important improvement, especially as agents become more integrated into the development lifecycle.

Getting it up and running is straightforward, at least on macOS, the only platform currently supported. A single curl command pulls down a bash script, drops it into /usr/local/bin, and, if Claude Code is detected, also installs a SKILL.md file. It’s a lean installation process for a lean tool.

curl -fsSL https://raw.githubusercontent.com/amerry19/pst-cli/main/install.sh | bash

The author openly invites folks to check out the source code on GitHub. A star on the repository is, of course, always appreciated. This is the kind of pragmatic tooling that often gets overlooked in the race for the next big AI paradigm shift, but it’s precisely these small wins that smooth out the rough edges of our daily grind.

Why This Matters Beyond Convenience

Look, it’s easy to dismiss pst as just another utility. Another piece of code to manage. But that’s precisely the point. The real promise of AI agents in development isn’t just about generating code; it’s about integrating them into existing, often messy, workflows. And the current state of secure credential management when interacting with these agents is, frankly, embarrassing. We’re talking about systems that can compose symphonies but still struggle with basic secrets hygiene.

This isn’t just about avoiding accidental leaks. It’s about fostering trust. Developers need to be able to rely on their tools, AI-powered or otherwise, to handle sensitive information responsibly. When an agent asks you to paste a password or an API key into a chat, it erodes that trust. It screams, “I don’t fully understand the implications of what I’m asking for.” pst offers a small but vital step towards bridging that gap, demonstrating that we can build AI integrations that are both powerful and secure.

This is reminiscent of early web development, where rudimentary security measures were bolted on as an afterthought. We’re seeing a similar pattern emerge with AI agents. The excitement of what they can do often overshadows the mundane but critical question of how they should do it. pst is an early signal that developers are already identifying and fixing these security blind spots. It’s the digital equivalent of putting a lock on the filing cabinet after the first few reports went missing.

What’s Next for pst?

Right now, pst is a macOS-only affair, which limits its immediate reach. The case for cross-platform support is obvious: Linux and Windows users are left out in the cold, at least for now. If the tool gains traction, and it deserves to, expanding its compatibility is the natural next step. The underlying concept of securely injecting secrets via environment variables is platform-agnostic; the implementation, however, requires careful handling of each OS’s native credential storage and shell integration mechanisms.
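To make that concrete, the storage backend is the only genuinely platform-specific piece. The mapping below is hypothetical (the per-OS tools named in the comments are real, but pst ships none of this today); the injection step it feeds into is identical everywhere:

```shell
#!/bin/sh
# Hypothetical per-platform backends for the same store/lookup contract.
#
#   macOS:   security add-generic-password -U -a "$USER" -s "$name" -w "$value"
#            security find-generic-password -a "$USER" -s "$name" -w
#   Linux:   printf '%s' "$value" | secret-tool store --label="$name" service pst key "$name"
#            secret-tool lookup service pst key "$name"
#   Windows: Credential Manager, e.g. via PowerShell's SecretManagement module.
#
# Stub standing in for whichever platform lookup is available, so the
# shared injection step can be demonstrated portably:
lookup_secret() { printf '%s' 'sk-example-123'; }

# Identical on every platform: scope the secret to one child process.
API_KEY="$(lookup_secret my-api-key)" sh -c 'printf "%s\n" "$API_KEY"'
```

The design point: everything above the stub is swappable per OS, while the last line never changes, which is what makes the concept portable even though the current implementation is not.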

Beyond platform expansion, one wonders about deeper integration. Could pst become a de facto standard for agent credential handling? Could it be part of a larger security suite for AI-assisted development? These are questions for the future, but the groundwork is laid. It’s a testament to the power of identifying a specific pain point and crafting a targeted, elegant solution. Don’t let your agents become liability vectors. Keep building, and keep it secure.



Written by Alex Rivera

Developer tools reporter covering SDKs, APIs, frameworks, and the everyday tools engineers depend on.


Originally reported by dev.to
