Shadow AI Agents Are an Auth Problem, Not a Detection Problem

Jonah Schwartz

This week, Okta announced a new feature in their Identity Security Posture Management platform: Agent Discovery. The pitch is simple — find the AI agents operating inside your organization that nobody sanctioned. Detect the "shadow AI."

It's a real problem. Developers are spinning up agents that hit APIs with hardcoded credentials, personal OAuth tokens, and shared service accounts. Security teams have no visibility. Compliance is a mess. Okta's response is to detect these rogue agents by monitoring OAuth consent events through their browser plugin.

But here's the thing: detection is treating the symptom. The disease is that authenticating AI agents properly is still way too hard.

Why Agents Go Shadow

Nobody wakes up thinking "I'm going to create a security incident today." Developers build agents that go shadow for one reason: doing auth the right way takes too long.

Think about what it takes to properly authenticate an AI agent with, say, GitHub. You need to register an OAuth app, set up redirect URIs, handle the authorization flow, manage token storage, implement refresh logic, deal with scope management, and wire up revocation. That's a day of work before your agent writes a single line of useful code.
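To make that concrete, here's a rough sketch (Python) of just two of those steps, the consent redirect and the code-for-token exchange, against GitHub's real OAuth endpoints. The `CLIENT_ID`, `CLIENT_SECRET`, and redirect URI are placeholders you only get after registering an OAuth app, and this still omits token storage, refresh, scope management, and revocation:

```python
# Sketch of the manual GitHub OAuth dance an agent developer must wire up.
# Endpoint URLs are GitHub's documented OAuth endpoints; credentials are
# placeholders obtained by registering an OAuth app first.
from urllib.parse import urlencode

AUTHORIZE_URL = "https://github.com/login/oauth/authorize"
TOKEN_URL = "https://github.com/login/oauth/access_token"

def build_authorize_url(client_id: str, redirect_uri: str,
                        scopes: list[str], state: str) -> str:
    """Step 1: construct the consent URL to send the user to."""
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
        "state": state,  # CSRF token you must generate and later verify yourself
    }
    return f"{AUTHORIZE_URL}?{urlencode(params)}"

def exchange_code(client_id: str, client_secret: str, code: str) -> dict:
    """Step 2: trade the callback's one-time code for an access token."""
    import requests  # third-party; pip install requests
    resp = requests.post(
        TOKEN_URL,
        data={"client_id": client_id, "client_secret": client_secret, "code": code},
        headers={"Accept": "application/json"},
    )
    resp.raise_for_status()
    return resp.json()  # then: store it, refresh it, revoke it...
```

And this is the easy part: the redirect handler, token persistence, and refresh loop are where the rest of the day goes.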

Or you paste in a personal access token and move on with your life. That's what people do. That's how shadow agents are born.

The same pattern plays out with Google, Linear, Slack, and every other OAuth provider. The gap between "I want my agent to access this API" and "my agent is accessing this API with proper auth" is measured in hours or days. So developers skip the auth and ship the feature.

Detection Is Necessary but Not Sufficient

To be clear: Okta is right that enterprises need visibility into what agents are running. Microsoft published their own MCP governance framework this week, and it emphasizes the same thing — you need to know what's connecting to what.

But a world where you detect shadow agents and then... tell developers to go set up proper OAuth flows? That's a world where shadow agents keep getting created. You've added a feedback loop, not removed the incentive.

It's like putting up speed cameras but not fixing the road. People will still drive too fast because the road is designed for speed. The fix is redesigning the road — making the right thing the easy thing.

Make Auth Easier Than Skipping Auth

This is why we built TapAuth. The thesis is simple: if authenticating an AI agent is a single API call, nobody will bother with hardcoded tokens.

Here's what that looks like in practice. Your agent needs a GitHub token with repo scope:

POST /api/v1/grants
{
  "provider": "github",
  "scopes": ["repo"]
}

That's it. The user gets a link to approve the access. They see exactly what scopes are requested, by which agent, and they can set a time limit — one hour, one day, one week. When they approve, the agent gets a token. When the time expires or the user revokes, the token dies.

No OAuth app registration. No redirect URI management. No token storage. No refresh logic. One API call to request, one API call to poll for the token. The entire integration is ten lines of code.
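As a sketch, the request-then-poll flow might look like this in Python. Only the `POST /api/v1/grants` payload above comes from this post; the base URL, the poll path, and the response fields (`grant_id`, `status`, `token`) are illustrative assumptions, not a documented API:

```python
# Hypothetical sketch of the request-then-poll integration described above.
# BASE_URL, the poll endpoint, and the response shape are assumptions.
import json
import time
import urllib.request

BASE_URL = "https://api.tapauth.example"  # placeholder, not a real endpoint

def build_grant_payload(provider: str, scopes: list[str]) -> dict:
    """The request body shown in the post."""
    return {"provider": provider, "scopes": scopes}

def request_grant(provider: str, scopes: list[str]) -> dict:
    """API call #1: request a grant; the response includes an approval link."""
    body = json.dumps(build_grant_payload(provider, scopes)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/api/v1/grants", data=body,
        headers={"Content-Type": "application/json"}, method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def wait_for_token(grant_id: str, interval: float = 2.0) -> str:
    """API call #2 (repeated): poll until the user approves, then return the token."""
    while True:
        with urllib.request.urlopen(f"{BASE_URL}/api/v1/grants/{grant_id}") as resp:
            grant = json.load(resp)
        if grant.get("status") == "approved":
            return grant["token"]
        time.sleep(interval)
```

Note what's absent: no redirect handler, no client secret, no refresh loop. The agent asks, the human approves, the token arrives.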

The Trust Layer, Not Just the Auth Layer

What makes this different from "just simplify OAuth" is the trust model. TapAuth isn't just plumbing — it's a trust layer between humans and AI agents.

When a user approves a grant through TapAuth, they're making an informed decision. They see the agent's name, the requested scopes, and they choose the duration. They can revoke at any time from a dashboard that shows every active grant. There's a complete audit trail of what was requested, when it was approved, and when it was used.

This is the visibility that security teams actually want. Not "we detected an agent using an OAuth consent" — but "here's every agent that has access, what it can do, who approved it, and when that access expires."

Okta's detection tells you about the problem after it exists. A proper trust layer prevents the problem from existing in the first place.

The Real Fix for Shadow AI

The industry is converging on a few truths:

  • MCP standardizes how agents talk to tools — but doesn't handle who's allowed to talk
  • OAuth 2.1 is the right auth protocol — but it's too complex for most agent developers to implement from scratch
  • Detection catches problems — but doesn't remove the incentive to create them
  • The missing piece is a trust layer — something that makes proper auth the path of least resistance

Shadow AI agents will keep proliferating as long as the fastest way to ship an agent is to skip auth. The answer isn't better detection. It's making auth so easy that there's no reason to skip it.

That's what we're building at TapAuth: the trust layer between humans and AI agents. Because the best security is the kind developers actually use.