Shadow AI · May 5, 2026 · 9 min read

The AI Tools Employees Use Without Telling IT (And What to Do About Each One)

The most common shadow AI tools aren't obscure — they're tools your employees find through a Google search, a LinkedIn post, or a recommendation from a colleague. Here's a plain-language breakdown of what the most widely used tools actually do with your company's data, and how to handle each one in your AI policy.

Note: AI tool data handling terms change frequently. The information below reflects publicly available terms and privacy policies as of early 2026. Always verify current terms directly with each vendor before finalizing your policy tier assignments.

Building a tool tier list for your AI policy requires knowing what each tool actually does with the data employees enter. Most employees don't read terms of service. Most HR managers don't either. This guide cuts through to what actually matters for policy purposes: whether inputs are used for model training, whether enterprise agreements change that, and what risks each tool category creates.

Organizations detect an average of 223 data policy violations per month tied to AI tool usage — most involving tools employees chose themselves without IT review. (Netskope Cloud and Threat Report, 2026)

The tools showing up most in enterprise environments

ChatGPT (OpenAI)
Free tier: Tier 2 / Limited · Paid Enterprise: Tier 1
What employees use it for: drafting, research, summarization, coding, analysis
What it does with your data: ChatGPT Free and Plus tiers allow OpenAI to use conversations to improve their models unless users actively opt out in settings — a step most users never take. ChatGPT Enterprise and Team plans include contractual commitments that data is not used for training and is not retained beyond the session. The difference between free and paid tiers is not just features — it's a fundamentally different data relationship.
Policy verdict: Tier 2 (limited use, non-sensitive data only) for free/Plus accounts. Tier 1 for ChatGPT Enterprise with a signed data processing agreement. Make the free/paid distinction explicit in your tier list — many employees assume all versions have the same data protections.
Microsoft Copilot
Enterprise: Tier 1 · Consumer: Tier 2
What employees use it for: drafting in Word/Outlook/Teams, meeting summarization, data analysis in Excel
What it does with your data: Microsoft Copilot for Microsoft 365 (the enterprise version integrated into your Microsoft 365 tenant) operates under Microsoft's commercial data protection commitments — inputs are not used for model training, data stays within your tenant. The consumer version of Copilot (accessed via Bing or copilot.microsoft.com without an enterprise license) has different and less protective terms.
Policy verdict: Tier 1 for Copilot for Microsoft 365 if your organization has a Microsoft 365 Business or Enterprise subscription. Tier 2 for consumer Copilot access. Employees at Microsoft 365 organizations often don't know the distinction — your tier list should name both explicitly.
Grammarly
Requires review: may process all written content
What employees use it for: proofreading, tone adjustment, writing improvement across email, documents, and web
What it does with your data: Grammarly's browser extension and desktop app process everything employees type in real time — including emails, internal documents, CRM entries, and anything else they write. Grammarly's business and enterprise plans include stricter data commitments. The free and personal premium tiers have broader data collection and use rights. The risk here isn't just intentional sharing — it's passive, automatic transmission of everything written.
Policy verdict: This is one of the most underappreciated shadow AI risks because employees don't think of Grammarly as an "AI tool" — they think of it as a spell checker. Grammarly Business/Enterprise can be Tier 1 or 2 depending on your data sensitivity requirements. Free Grammarly on company devices or for company work should be Tier 3 in regulated industries. Address it by name in your policy.
Otter.ai / Fireflies.ai / Similar meeting transcription tools
Tier 2 at best · High-sensitivity meetings: Tier 3
What employees use it for: recording and transcribing meetings, generating meeting summaries and action items
What it does with your data: Meeting transcription tools record, store, and process the full audio and text of meetings — including attendees, topics, decisions, and any sensitive content discussed. Free tiers of these tools often retain transcripts on their servers, may use recordings for product improvement, and provide limited control over data retention. The risk surface is uniquely broad because one employee enabling a recording bot exposes all meeting participants — who may not have consented and may include clients or external partners.
Policy verdict: This category requires explicit policy treatment. Meeting transcription tools should only be used with verified enterprise agreements, and employees should be required to disclose to all participants when a recording bot is in the meeting. Client meetings, board meetings, and HR conversations should be specifically excluded unless explicitly approved for each case.
Notion AI / ClickUp AI / Similar productivity platform AI features
Tier 2 — depends on your existing Notion/ClickUp plan
What employees use it for: summarizing notes, generating content, organizing project data within their existing project management tool
What it does with your data: AI features embedded in productivity platforms like Notion or ClickUp operate on the data already stored in those platforms. The data protections depend on your existing contract with the platform. Enterprise plans typically include data processing agreements that cover AI features. If your organization uses these platforms' paid business tiers, the AI features are likely covered by existing data agreements. Personal or free accounts are not.
Policy verdict: Review whether your existing Notion or ClickUp contract covers AI features and what data commitments apply. If yes, Tier 1 or 2 depending on sensitivity. If employees are using personal accounts with company data, that's a broader data hygiene problem beyond just the AI features.
Perplexity AI
Free: Tier 2 · Pro with privacy settings: Tier 2
What employees use it for: research, fact-checking, summarizing information with source citations
What it does with your data: Perplexity is primarily a research tool — employees ask questions and get sourced summaries. The risk is lower than ChatGPT for most use cases because employees typically aren't pasting company documents into Perplexity — they're asking it questions. However, if employees start uploading documents or sharing detailed internal context to get better answers, the data risk increases substantially. Perplexity Pro includes some data privacy controls but offers no enterprise-grade DPA for most users.
Policy verdict: Tier 2 for most users — acceptable for research and general queries using non-sensitive information. Include in your policy with explicit guidance that company documents should not be uploaded and that internal context (deal specifics, personnel details, unreleased product information) should not be shared.
Claude.ai (Anthropic)
Free: Tier 2 · Claude Teams/Enterprise: Tier 1
What employees use it for: drafting, analysis, coding, document summarization, research assistance
What it does with your data: On the Claude.ai free and Pro tiers, Anthropic may use conversations to train models by default (an opt-out is available in settings). Claude Teams and Enterprise include commitments that conversations are not used for training and are handled under a data processing agreement. The free tier is meaningfully different from the enterprise tier in terms of data protections — the same distinction that applies to ChatGPT.
Policy verdict: Tier 2 for claude.ai free/Pro (personal accounts). Tier 1 for Claude Teams or Enterprise with a signed DPA. Same handling as ChatGPT — be explicit in your tier list about which version is which, because employees treat them as identical.
AI coding assistants (GitHub Copilot, Cursor, Codeium)
Requires explicit treatment for source code
What employees use it for: code completion, code generation, debugging, code review — used by engineers, data analysts, and increasingly non-technical staff building automations
What it does with your data: AI coding tools process code in real time as it's written — which means they have access to whatever codebase the developer is working in. For proprietary source code, this is a material IP exposure risk. GitHub Copilot Business and Enterprise include commitments that code is not retained or used for training. Free versions of these tools typically do not. The risk extends beyond professional engineers — any employee using a no-code or automation tool with AI assistance may be sharing internal system logic and data schema.
Policy verdict: AI coding tools should be addressed separately in your policy, not lumped into the general AI tool category. Proprietary source code, internal API documentation, and data schemas should never be shared with AI tools that lack data processing agreements. GitHub Copilot Business is Tier 1 for most engineering contexts. Free/personal coding AI tools should be Tier 3 for any work involving company repositories.

The question your tier list needs to answer

For every tool on your list, employees should be able to answer: "If I use this tool for [specific task], is that okay?" The tier designation (approved / limited / prohibited) is only half the answer. The data handling guidance is the other half — because an approved tool used with the wrong data creates the same risk as a prohibited tool.

A well-constructed tier list pairs the tool name with the specific data restrictions that apply: "ChatGPT Enterprise — Tier 1 — approved for use with internal company information excluding customer PII and financial data" is more useful than "ChatGPT Enterprise — Approved."
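If your tier list lives somewhere employees (or internal tooling) can query it, a simple structured format works better than a paragraph in a PDF. Here is a minimal sketch in Python — the entries mirror the ChatGPT example above, but the field names and structure are illustrative assumptions, not a prescribed format:

```python
# Hypothetical tier-list entries pairing each tool with its data restrictions.
# The field names and example entries are illustrative; adapt to your policy template.
TIER_LIST = [
    {
        "tool": "ChatGPT Enterprise",
        "tier": 1,
        "approved_for": "internal company information",
        "excluded_data": ["customer PII", "financial data"],
    },
    {
        "tool": "ChatGPT Free/Plus",
        "tier": 2,
        "approved_for": "non-sensitive data only",
        "excluded_data": ["any company-confidential information"],
    },
]

def lookup(tool_name):
    """Return the policy entry for a tool, or None if it is unlisted."""
    for entry in TIER_LIST:
        if entry["tool"].lower() == tool_name.lower():
            return entry
    return None

entry = lookup("ChatGPT Enterprise")
print(f'{entry["tool"]} — Tier {entry["tier"]} — approved for '
      f'{entry["approved_for"]}, excluding {", ".join(entry["excluded_data"])}')
```

The point of the structure is that every entry answers both halves of the question: the tier designation and the data restrictions that go with it. An unlisted tool returning None is itself useful policy signal — it means the tool hasn't been reviewed yet.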

Get a customized tool tier list built for your company's actual tools.

Shadow AI Policy generates a tool tier list tailored to the AI tools your company uses — paired with your acceptable use policy, employee acknowledgment form, and manager FAQ.

Generate my tool tier list →

One principle that simplifies every tool decision

When you're not sure how to classify a tool, apply this test: does this vendor have a signed data processing agreement with your organization, and does that agreement include a commitment that inputs are not used for model training?

If yes to both: Tier 1 or Tier 2 depending on what data categories are explicitly covered.

If no: Tier 2 (non-sensitive data only) at best, Tier 3 if the tool handles anything where data exposure would create legal, regulatory, or client relationship risk.
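The test above reduces to a small decision function. A sketch, assuming four yes/no inputs — the function and parameter names are illustrative, not part of any standard:

```python
def classify_tool(has_signed_dpa: bool,
                  no_training_on_inputs: bool,
                  sensitive_categories_covered: bool,
                  exposure_creates_risk: bool) -> int:
    """Apply the two-question heuristic from the policy test.

    Returns the tier number: 1 = approved, 2 = limited use
    (non-sensitive data only), 3 = prohibited.
    """
    if has_signed_dpa and no_training_on_inputs:
        # Yes to both: Tier 1 or Tier 2, depending on whether the DPA
        # explicitly covers the sensitive data categories in question.
        return 1 if sensitive_categories_covered else 2
    # No to either: Tier 2 at best, Tier 3 if data exposure would create
    # legal, regulatory, or client relationship risk.
    return 3 if exposure_creates_risk else 2

# ChatGPT Enterprise with a signed DPA covering the relevant data:
print(classify_tool(True, True, True, False))    # → 1
# Free Grammarly on company devices in a regulated industry:
print(classify_tool(False, False, False, True))  # → 3
```

Encoding the heuristic this way also makes the edge cases explicit: a tool with a DPA but no no-training commitment still falls through to the "no" branch, which matches the intent of the test — both conditions must hold.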

This heuristic handles 90% of tool classification decisions without needing to read every vendor's terms from scratch.