Why Your AI Agent Shouldn't Have Your Password
In early February 2026, Snyk researchers found over 280 OpenClaw skills on ClawHub that were leaking API keys. Not in some obscure corner of the internet — on the most popular AI agent platform in the world. Skills that developers published and shared, with hardcoded credentials sitting right there in the source code.
The same week, Okta launched "Agent Discovery" — a tool specifically designed to find shadow AI agents in your organization by scanning OAuth consent grants. The fact that Okta built an entire product around this tells you everything you need to know about the state of AI agent security.
We have a problem. And the problem is that we're giving AI agents credentials the same way we gave them to humans in 2005.
The API Key Anti-Pattern
Here's how most AI agent integrations work today:
- Developer goes to a service provider's dashboard
- Generates an API key
- Copies it into an environment variable, a config file, or (god help us) a hardcoded string
- The agent uses that key for every request
- The key never expires
- The key has full access
- Nobody remembers it exists six months later
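The list above, sketched in Python. Everything here is illustrative: the service name, the environment variable, and the endpoint are hypothetical, but the shape is what you will find in most agent configs today.

```python
import os

# One long-lived, full-access key, read once at startup. The hardcoded
# fallback string is exactly the kind of credential that ends up on ClawHub.
API_KEY = os.environ.get("EXAMPLE_SERVICE_API_KEY", "sk-live-hardcoded-fallback")

def call_service(method: str, path: str) -> dict:
    """Build the auth headers for a request to the (hypothetical) service.

    Note what is missing: no scope check, no expiry, no record of which
    user or task triggered the call. Every request carries the same
    all-powerful credential.
    """
    headers = {"Authorization": f"Bearer {API_KEY}"}
    # In a real agent this would be sent, e.g.:
    # requests.request(method, f"https://api.example.com{path}", headers=headers)
    return headers  # returned here so the sketch stays runnable offline
```

From the server's point of view, a calendar-reading task and a calendar-deleting task present the exact same credential, which is the root of the problems below.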
This is the digital equivalent of giving someone a master key to your building, writing "DO NOT COPY" on it in Sharpie, and hoping for the best.
Why Passwords and API Keys Are the Wrong Model
API keys were designed for server-to-server communication where both sides are controlled by the same organization. They assume a level of trust that doesn't exist when the "server" is an AI agent that might be running code written by someone else, executing prompts crafted by a user, or chaining together tools in ways nobody anticipated.
The problems are fundamental:
No scope limitation. Most API keys are all-or-nothing. Your agent needs to read calendar events? Here's a key that can also delete them, create new ones, and modify sharing settings. Hope it doesn't hallucinate a cleanup task.
No time limitation. API keys live forever unless someone manually revokes them. That key you created for a hackathon demo in January? It's still active. It will be active in December. It might be active when your great-grandchildren are born.
No user involvement. The human is completely absent from the flow. A developer creates the key, and from that point on, the agent operates with those credentials regardless of what any end user thinks about it.
No audit trail. When something goes wrong (and it will), good luck figuring out which agent used which key to do what. API keys don't carry identity. They're bearer tokens in the worst sense.
What the Headlines Are Telling Us
The VentureBeat headline from February 1st nailed it: "OpenClaw proves agentic AI works. It also proves your security model doesn't."
We're in a moment where AI agents are useful. They can code, research, manage calendars, send emails, update CRMs. The capability is real. But the security model hasn't caught up. We're bolting 2020s AI onto 2010s auth infrastructure and acting surprised when credentials leak.
The Register reported that unaccounted-for AI agents are being handed wide access via OAuth tokens — tokens that were granted once and never reviewed. CIO magazine wrote about attackers skipping consent screens entirely by using matched client IDs. These aren't theoretical attacks. They're happening now.
The Alternative: Delegated, User-Approved Auth
The fix isn't complicated conceptually. It's the same pattern the web figured out fifteen years ago with OAuth: don't give the app your password. Let it request specific access, let the user approve it, and make the access revocable and time-limited.
For AI agents, this means:
The agent requests access, not credentials. Instead of storing an API key, the agent makes an API call saying "I need read access to this user's Google Calendar for the next hour." No secrets involved.
The human approves. The user sees exactly what's being requested — which service, what permissions, for how long — and makes an informed decision. Not a "click OK on this wall of text" decision, but a real one with clear scope.
Access is scoped and temporary. The agent gets a token that does exactly what was approved and expires when the time limit is up. No eternal API keys. No over-provisioned access.
Everything is auditable. Every grant, every approval, every token use is logged. When you need to know what happened, you can actually find out.
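The four properties above can be sketched as a toy grant flow. The class and method names here are purely illustrative, not TapAuth's actual API; the point is the shape: explicit scope, hard expiry, human approval, and a log entry for every step.

```python
import time
import secrets
import dataclasses

@dataclasses.dataclass
class Grant:
    scope: str          # exactly what was approved, nothing more
    expires_at: float   # hard expiry, so no eternal credentials
    token: str

class DelegatedAuth:
    """Toy model of a delegated, user-approved auth flow."""

    def __init__(self):
        self.audit_log = []  # every request, approval, and use is recorded

    def request_access(self, agent: str, scope: str, ttl_seconds: int) -> dict:
        """The agent asks for access; it never holds a long-lived secret."""
        req = {"agent": agent, "scope": scope, "ttl": ttl_seconds}
        self.audit_log.append(("requested", req))
        return req

    def approve(self, request: dict) -> Grant:
        """The human sees the exact scope and duration, then approves."""
        grant = Grant(
            scope=request["scope"],
            expires_at=time.time() + request["ttl"],
            token=secrets.token_urlsafe(16),
        )
        self.audit_log.append(("approved", request["agent"], grant.scope))
        return grant

    def use(self, grant: Grant, action: str) -> bool:
        """Tokens only work within their approved scope and lifetime.

        Scope matching is exact-string here for brevity; real systems
        use structured scopes.
        """
        allowed = action == grant.scope and time.time() < grant.expires_at
        self.audit_log.append(("used", action, allowed))
        return allowed

auth = DelegatedAuth()
req = auth.request_access("calendar-agent", "calendar:read", ttl_seconds=3600)
grant = auth.approve(req)
print(auth.use(grant, "calendar:read"))    # in scope -> True
print(auth.use(grant, "calendar:delete"))  # out of scope -> False
```

Compare this with the API-key sketch earlier: the delete attempt fails not because the agent behaved, but because the credential was never capable of it in the first place.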
This is what we built with TapAuth. One API call from the agent. One tap from the user. Scoped, time-limited access with full visibility.
But What About Developer Experience?
The pushback I hear most is: "Sure, but API keys are easy. OAuth is hard."
And traditionally, that's been true. Setting up OAuth is a multi-day project involving client registration, redirect URIs, consent screens, token storage, refresh logic, and scope management. Nobody wants to do that for every integration.
That's exactly why we built TapAuth to handle all of that. From the developer's perspective, it's one API call. Simpler than setting up an API key, actually — because there's no key to generate, store, rotate, or worry about leaking.
The complexity is in the platform, not in your code. That's where it belongs.
The Clock Is Ticking
Every week brings new headlines about AI agent security failures. The 280-plus leaking OpenClaw skills are just the ones Snyk caught. How many more are out there? How many API keys are sitting in agent configs right now, one git push away from being public?
The answer to agent credential management isn't better secrets management — it's eliminating secrets from the equation entirely. Let the human approve. Let the access expire. Let the audit trail exist.
Your AI agent shouldn't have your password. It shouldn't have an eternal API key either. It should have exactly the access it needs, for exactly as long as it needs it, approved by exactly the person who should approve it.
That's not a hard sell. That's just how auth should work.