Microsoft Listed 10 Ways Your AI Agent Can Go Wrong — Here's the One They Missed

Jonah Schwartz

Last week, Microsoft published "Top 10 actions to build agents securely with Copilot Studio." It's a good list. Misconfiguration, over-broad sharing, unauthenticated access, weak orchestration controls — these are real problems and Microsoft is right to name them.

I read the whole thing nodding along. Then I got to the end and realized what wasn't there.

The user.

A Platform Admin's Checklist

Microsoft's ten items read like a checklist for IT admins and platform engineers. And that makes sense — Copilot Studio is an enterprise platform, and Microsoft is telling enterprises how to lock it down. Enable DLP policies. Restrict connector sharing. Audit agent activity. All good advice.

But here's the thing: every one of those ten actions is about what the platform should do. What about the person whose Google account is connected? Whose GitHub tokens are stored? Whose Slack workspace the agent is reading?

That person — the actual human in the loop — gets mentioned zero times.

The Blind Spot

This isn't just a Microsoft problem. It's an industry-wide blind spot, and it's getting wider.

Look at what's happening in the MCP ecosystem right now. Traefik Hub just shipped an MCP gateway with built-in auth. Supabase published MCP auth documentation. Cerbos released MCP authorization primitives. MintMCP is building managed MCP infrastructure. Everyone is racing to be the auth layer for AI agents.

And they're all building for the same persona: the platform operator.

Gateways give operators traffic control. RBAC gives operators access control. Audit logs give operators visibility. These are necessary. But they're not sufficient. Because operators aren't the ones whose credentials are at risk.

When your AI agent connects to GitHub on your behalf, it holds an OAuth token scoped to your repos. When it connects to Google, it can read your email. When it connects to Slack, it's in your DMs. The platform admin can see aggregate logs. You can see nothing.

What "User in the Loop" Actually Means

We talk a lot about "human in the loop" for AI safety — making sure a person reviews an agent's decisions before they're executed. That's important. But there's another loop that's being ignored: the auth loop.

Being in the auth loop means:

  • You choose the scopes. Not "this agent wants access to your Google account" — but "this agent wants to read your calendar events for the next 7 days." Granular, specific, understandable.
  • You set the time limit. This token expires in one hour. Or at the end of this session. Not "until you remember to revoke it," which means never.
  • You can revoke instantly. Not "submit a ticket to IT" or "dig through your Google security settings to find which of 30 connected apps is the agent." One click. Done.
  • You see what happened. An audit trail that you can read. Not buried in a SIEM that only the security team has access to. Your tokens, your activity log.
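To make those four properties concrete, here's a minimal sketch in Python. Everything in it — the `AgentGrant` class, its field names, and the revocation flow — is hypothetical, not an API from any real platform; it just shows how small the core of a user-held grant actually is.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentGrant:
    """A hypothetical user-held credential grant for a single agent."""
    agent: str
    scopes: list[str]                    # granular, user-chosen scopes
    expires_at: datetime                 # user-chosen time limit
    revoked: bool = False
    audit_log: list[str] = field(default_factory=list)

    def is_valid(self) -> bool:
        return not self.revoked and datetime.now(timezone.utc) < self.expires_at

    def record(self, action: str) -> None:
        # the user, not just the security team, can read this log
        self.audit_log.append(f"{datetime.now(timezone.utc).isoformat()} {action}")

    def revoke(self) -> None:
        # one click, instant — no IT ticket
        self.revoked = True

# user grants calendar read access for one hour
grant = AgentGrant(
    agent="scheduling-agent",
    scopes=["calendar.events:read"],
    expires_at=datetime.now(timezone.utc) + timedelta(hours=1),
)
grant.record("read 3 calendar events")
grant.revoke()
assert not grant.is_valid()
```

Note that nothing here requires platform cooperation beyond honoring expiry and revocation — the scopes, the clock, and the log all belong to the user.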

This isn't a nice-to-have. As agents proliferate — the average enterprise ran 14 agents in 2025, and that number is tripling — the number of tokens floating around with access to individual users' accounts is exploding. Each one is a liability that the user can't see and can't control.

Why Platforms Won't Solve This

You might think platforms will eventually add user-facing controls. Maybe they will. But there's a structural reason they won't do it well: platforms are incentivized to make agent adoption frictionless, not to give users reasons to say no.

Every scope selection screen is a moment where a user might grant fewer permissions. Every time-limit option is a chance for a token to expire before the agent completes a task. Every revocation button is a way for a user to break a workflow.

Platform builders optimize for activation and retention. User-controlled auth optimizes for trust. These aren't always the same thing.

That's why the trust layer needs to be independent of the platform. It needs to represent the user's interests, not the platform's growth metrics.

Microsoft's List, Plus One

If I could add an eleventh item to Microsoft's checklist, it would be this:

11. Give users visibility and control over agent-held credentials.

Ensure that end users can see which agents hold tokens to their accounts, what scopes those tokens have, when they expire, and what the agent has done with them. Provide one-click revocation. Make the audit trail user-accessible, not just admin-accessible.
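As a sketch of how little this item demands, the user-facing view could be not much more than a per-user token inventory with a revoke handle. The shape below is hypothetical — no platform exposes exactly this — but it covers everything the item asks for: which agent, which scopes, when it expires, what it did.

```python
import json

# hypothetical per-user token inventory, keyed by token id
TOKENS = {
    "tok_123": {
        "agent": "scheduling-agent",
        "scopes": ["calendar.events:read"],
        "expires_at": "2025-06-01T15:00:00+00:00",
        "last_activity": "read 3 calendar events",
    },
}

def list_my_tokens(tokens: dict) -> str:
    """What the end user sees: every agent-held token, in readable JSON."""
    return json.dumps(tokens, indent=2)

def revoke(tokens: dict, token_id: str) -> None:
    """One-click revocation: the token is removed, not flagged for later cleanup."""
    tokens.pop(token_id, None)

revoke(TOKENS, "tok_123")
assert "tok_123" not in TOKENS
```

The point isn't this particular schema; it's that the admin-facing SIEM and the user-facing inventory can be views over the same data.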

This isn't about undermining platform security. It's about completing it. Platform controls and user controls are complementary. The admin sets the boundaries; the user manages their own credentials within those boundaries. Defense in depth, all the way to the person whose data is actually at stake.

What We're Building

This is exactly what TapAuth is for. We're building the trust layer between humans and AI agents — not another gateway for platform admins, but a layer that puts the user in the auth loop.

When an agent needs access to your service, TapAuth shows you exactly what it's asking for in plain language. You choose the scope. You set the duration. You see the activity. You revoke when you want.

Microsoft's ten actions are a solid foundation. But until the user is in the loop, that foundation has a gap. The agents are already here, already holding tokens to your accounts, already accessing your data.

You should probably be able to see what they're doing.

Learn more about TapAuth →