Unsanctioned AI tool usage by employees is rapidly expanding the attack surface of organizations, creating critical security blind spots that require immediate and informed management to balance productivity and risk.
AI adoption has been moving at an unprecedented pace. Your organization has likely already officially sanctioned at least a few LLMs, agentic AI tools, enterprise search products, and more.
But are those sanctioned apps the only AI tools currently active in your environment? If you’re like most organizations, the answer is likely “no.” And that could be a problem.
The productivity paradox
Your employees are adopting AI tools faster than your IT and infrastructure teams can approve and purchase them. Your employees aren’t intentionally being reckless, of course. They’re simply trying to find the most effective and efficient ways to get their jobs done. The marketing team uses AI to draft campaign copy. Your engineers use it to review code and debug issues. Sales reps use it to summarize customer calls and draft follow-ups.
This is natural and inevitable, and in many ways it’s a good thing: today’s AI tools can unlock significant productivity gains when used responsibly.
But along with those productivity gains comes risk. When your employees connect these tools through OAuth grants, they often approve scopes well beyond what the tools actually need: email, calendar, Drive files, and contacts, sometimes with read access, sometimes with the ability to send messages on behalf of the user.
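To make that concrete, here’s an illustrative sample of the Google Workspace scopes an AI assistant might request at that consent screen. The scope strings are real Google OAuth scopes; the particular combination is our hypothetical example, not any specific vendor’s request:

```python
# Illustrative sample: Google OAuth scopes an AI assistant might request.
# The scope strings are real; the combination shown is hypothetical.
REQUESTED_SCOPES = [
    "https://www.googleapis.com/auth/gmail.readonly",     # read all mail
    "https://www.googleapis.com/auth/gmail.send",         # send mail as the user
    "https://www.googleapis.com/auth/calendar.readonly",  # read calendars
    "https://www.googleapis.com/auth/drive.readonly",     # read every Drive file
    "https://www.googleapis.com/auth/contacts.readonly",  # read contacts
]
```

Each of those scopes is bundled into a single “Allow” click, even when the tool’s actual job only touches a fraction of that data.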
Your employees aren’t thinking about the security implications when they click “Allow” on that OAuth prompt. They’re thinking about how to get their work done as quickly as possible. But that means it’s incumbent upon you and the security team to try to manage these risks as carefully and thoroughly as possible.
The growing blind spot
In a recent round of research Material conducted among CISOs and heads of corporate security, AI governance and managing AI risk emerged as the top priority for more than half of those surveyed. And our survey is far from the only research to surface this concern.
The CISOs we spoke with cited both end-user apps and the sensitive data being ingested into enterprise AI tools. We recently discussed the latter. On the user side, getting a handle on AI tool usage has become critically important. No leader wants to impede the productivity gains their users are getting from AI tools, but there is a glaring need to close the visibility gap.
Most organizations have no systematic way to answer basic questions like:
- Which AI tools have employees connected to our environment?
- What levels of access do these tools have?
- Which ones can read email? Access files? Send messages?
- Are any of these tools from vendors we haven’t vetted?
- How many employees are using the same AI tool, and what data are they collectively exposing?
The native controls in Google Workspace and Microsoft 365 offer some visibility, but limited capacity to operationalize that information. You can see the lists of OAuth grants if you know where to look, but parsing through hundreds or thousands of entries and users to understand the actual risk–that’s a completely different challenge.
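To show what “if you know where to look” means in practice, here’s a minimal sketch of pulling that raw grant list from Google Workspace via the Admin SDK Directory API’s Tokens resource. It assumes a service account with domain-wide delegation and read-only admin scopes already configured; the file name and admin address are hypothetical. It produces the raw entries only; the risk analysis is still on you:

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Minimal sketch: enumerate third-party OAuth grants across a Google
# Workspace domain. Assumes a service account with domain-wide
# delegation and these read-only admin scopes already granted.
SCOPES = [
    "https://www.googleapis.com/auth/admin.directory.user.readonly",
    "https://www.googleapis.com/auth/admin.directory.user.security",
]
creds = service_account.Credentials.from_service_account_file(
    "admin-sa.json", scopes=SCOPES            # hypothetical key file
).with_subject("admin@example.com")           # hypothetical admin user

directory = build("admin", "directory_v1", credentials=creds)

all_tokens = []  # (user email, token record) pairs
request = directory.users().list(customer="my_customer")
while request is not None:
    response = request.execute()
    for user in response.get("users", []):
        email = user["primaryEmail"]
        # List every app this user has authorized via OAuth.
        tokens = directory.tokens().list(userKey=email).execute()
        for token in tokens.get("items", []):
            all_tokens.append((email, token))
    request = directory.users().list_next(request, response)

print(f"{len(all_tokens)} grants across the domain")
```

Even in a mid-sized domain, that loop can return thousands of entries, which is exactly where the operationalization problem begins.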
The math is straightforward: estimates of the global average vary, but if your employees’ AI tool adoption has kept pace with it, it has grown anywhere between 50% and 400% in the last 12 months. Your unintentional insider risk surface has likely grown at the same rate, a velocity we simply haven’t seen with previous shadow IT trends.
And here’s the uncomfortable truth: this isn’t just a theoretical concern. Without visibility into shadow AI across your environment, you could very well have unmanaged apps with full read and write access to employees’ email, calendars, and Drive files.
The wrong approach: blocking everything
When we talk to security teams about this problem, there’s often an immediate impulse to block all AI tools: lock down OAuth permissions and forbid employees from connecting anything new.
We understand the instinct. And frankly, in certain situations, that’s the right approach. But most of the time it doesn’t actually fix the core problem, and sometimes it makes the problem worse.
If you block everything, your employees will find workarounds. They’ll use personal accounts, forward sensitive files to personal email addresses, or copy data outside the environment. You won’t eliminate the risk; at best you’ll slightly decrease the volume, while making it significantly harder to both detect and remediate.
Not only that, but you’ll slow down legitimate productivity gains and generate more friction that damages your relationship with the business. Security teams that are perceived as the department of “no” tend to lose influence and shunt users into workarounds by default.
The right approach isn’t to block everything. It’s to get visibility and then make informed decisions about what to allow, what to restrict, and how to manage the risk of what you are allowing.
From visibility to action
So what does effective visibility look like? It starts with continuous detection of the tools your employees are using, including an inventory of every third-party application connected to your environment. Not just the applications you know about and approved, but everything, whether employees connect through OAuth grants or sign up with app-specific passwords.
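As a rough sketch of what that inventory step involves under the hood, grouping the raw grants by OAuth client turns thousands of per-user entries into a per-app view. This builds on the `all_tokens` pairs collected in the earlier enumeration sketch:

```python
from collections import defaultdict

# Rough sketch: collapse per-user grants into a per-app inventory.
# `all_tokens` holds the (email, token) pairs gathered earlier.
apps = defaultdict(lambda: {"users": set(), "scopes": set()})
for email, token in all_tokens:
    app = apps[token.get("displayText") or token["clientId"]]
    app["users"].add(email)
    app["scopes"].update(token.get("scopes", []))

# Most widely used apps first: each line answers "how many employees,
# exposing what data" for one connected application.
for name, info in sorted(apps.items(), key=lambda kv: -len(kv[1]["users"])):
    print(f"{name}: {len(info['users'])} users, scopes={sorted(info['scopes'])}")
```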
This is where Material’s approach shines. For each connected application, Material provides visibility into who’s using it, how they’re connecting, the scope of the connection for OAuth access, and more. This changes the conversation from “we think employees are using unsanctioned AI tools” to “we know exactly which tools are in use, how many employees are using them, how they’re connecting, and which permissions have been granted.”
Visibility provides the foundation, but it’s not the end goal. Responsible management of AI tools that balances security and productivity within the organization’s risk tolerance is the ultimate destination.
And here’s what it looks like in practice:
- Establish a baseline. Using Material’s App Explorer, understand the current state. Which AI tools are already connected, how are they connected, and who’s using them? This isn’t about blame–it’s about understanding reality.
- Categorize and prioritize. Not all AI tools pose the same risk. A grammar checker with read-only access to draft documents is very different from a general AI assistant with full access to email and files. Focus your attention where it matters most (a rough scope-based triage sketch follows this list).
- Enable safe usage. If employees are using particular tools, it’s often because they have a legitimate need. Sanction those apps that are safe for your environment, and investigate safe alternatives for those that aren’t.
- Define acceptable use policies. Based on what you’ve discovered, establish clear policies about which types of AI tools are acceptable, which require review, and which are prohibited. Make these policies practical and highly visible.
- Monitor and enforce. Use Material’s continuous monitoring to detect when new AI tools are connected, and set up automated workflows to flag new applications for review and block those that fall outside your risk tolerance. This can’t be a one-time audit: it’s an ongoing process.
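To illustrate the categorize-and-prioritize and monitor-and-enforce steps, here’s a minimal scope-based triage sketch, building on the `apps` inventory above. The marker lists and tier assignments are illustrative assumptions, not a canonical ranking; tune them to your own risk tolerance:

```python
# Minimal sketch: tier each discovered app by the scopes it holds.
# The marker lists are illustrative assumptions; adjust to taste.
HIGH_RISK_MARKERS = ("gmail.send", "mail.google.com", "/auth/drive")
MEDIUM_RISK_MARKERS = ("gmail.readonly", "calendar", "contacts")

def risk_tier(scopes):
    joined = " ".join(scopes)
    if any(m in joined for m in HIGH_RISK_MARKERS):
        return "high"    # can send mail or reach Drive content
    if any(m in joined for m in MEDIUM_RISK_MARKERS):
        return "medium"  # broad read access to personal data
    return "low"         # narrow scopes, e.g. basic profile info

# Surface the riskiest, most widely used apps first for review.
for name, info in sorted(apps.items(), key=lambda kv: -len(kv[1]["users"])):
    tier = risk_tier(info["scopes"])
    if tier != "low":
        print(f"review: {name} [{tier}] {len(info['users'])} users")
```

Running a triage like this on a schedule, and diffing the results against the previous run, is the do-it-yourself version of the continuous monitoring described above.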
The path forward
Managing AI use isn’t about preventing innovation or efficiency–it’s about making informed decisions with complete information. Your employees are going to use AI tools: that’s not a problem to eliminate, it’s a reality to manage.
The question is whether you have the visibility to understand what’s connected to your environment and the controls to manage it responsibly.
Material’s approach to cloud workspace posture management provides that foundation: continuous discovery of third-party applications, detailed information about the permissions and risk, and flexible controls to enforce your policies without blocking legitimate productivity.
Want to see what's already connected to your cloud workspace? Material Security provides continuous discovery and management of third-party applications, OAuth grants, and other posture risks across Google Workspace and Microsoft 365. Request a demo to learn more.