Your AI acceptable use policy, ready in 10 minutes
Most employees now use personal AI accounts at work, outside IT visibility — and 63% of breached organizations have no AI governance policy in place (IBM, 2025).
Shadow AI Policy generates a custom shadow AI policy, employee acknowledgment form,
approved tool list, and Manager FAQ — tailored to your industry and size.
Free preview · $79 one-time or $149/mo with monthly updates · 30-day money-back guarantee · info@shadowaipolicy.com
One of four documents you'll receive, rendered in-browser — downloadable as Word or PDF.
Full policy document covering scope, data rules, approved uses, prohibited uses, incident reporting, and review schedule. Ready to add to your employee handbook.
🛡️
Shadow AI Tool Tier List
Every major AI tool classified as Approved, Limited Use, or Prohibited — pre-populated for your industry. Answers the question every HR manager faces: "Can we use ChatGPT?"
✍️
Employee Acknowledgment Form
One-page sign-off form confirming employees have read and understood the policy. Printable, or distribute via your existing e-signature tool (DocuSign, HelloSign, Adobe Sign).
❓
Manager's FAQ
Answers the 12 questions every team lead gets when a new AI policy drops. Hand it to every manager and stop answering the same questions all week.
How it works
Three steps. No IT team required.
1
Answer a few questions about your company
Industry, company size, which AI tools your team already uses, and what data you handle. Takes about 3 minutes.
2
Shadow AI Policy generates your tailored kit
Our AI analyses your profile against current shadow AI risks and generates documents specific to your industry, tools, and data obligations — not a generic template.
3
Download, customise, and distribute
Download all four documents as Word or PDF. Edit in-browser. Send the acknowledgment form to your team. Monitor plan subscribers get an alert when the AI landscape changes and their policy needs an update.
Free preview
Generate your AI policy kit
Step 1 of 3 — Your company
Tell us about your company
We'll tailor the policy to your industry, size, and risk profile.
Financial services
Healthcare
Legal / professional services
Technology / SaaS
Retail / e-commerce
Education
Manufacturing
Other
1–24
25–199
200–999
1,000+
HR manager
Legal / compliance
IT / security
CEO / founder
Operations
AI tools & data
We classify each tool as approved, limited, or prohibited based on your data profile.
ChatGPT
Microsoft Copilot
Claude
Gemini
GitHub Copilot
Grammarly
Otter / meeting AI
Notion AI
Midjourney / image gen
Perplexity
Not sure / need to audit
Customer PII
Confidential financial data
Protected health info (PHI)
Client legal files
Source code / IP
Payment card data (PCI)
Please enter a valid work email address to receive your policy kit.
Building your policy kit
Analysing company profile and industry risks
Classifying AI tools — approved, limited, prohibited
Drafting AI Acceptable Use Policy
Generating AI tool tier list
Writing employee acknowledgment form
Generating Manager FAQ
Finalising 4-document kit
Full kit ready — 4 documents tailored to your industry
Your preview is ready
Free preview — Sections 1 & 2
Section 1 — Purpose and scope
This Artificial Intelligence Acceptable Use Policy ("Policy") establishes guidelines for the responsible use of AI tools by all employees, contractors, and third parties acting on behalf of Acme Corp ("the Company").
The rapid adoption of AI tools introduces both significant productivity opportunities and serious data security risks. Employees using AI tools to process confidential financial data, client PII, or internal communications can inadvertently expose that information to third-party AI providers whose data retention and training policies may conflict with the Company's obligations to clients and regulators.
Section 2 — Who this policy applies to
This Policy applies to all full-time and part-time employees, contractors and freelancers with access to Company systems or data, and any third party using AI tools while performing work for the Company. Compliance is mandatory. Violations may result in disciplinary action up to and including termination of employment.
Sections 3–8 included in full policy: Data classification rules · Tool approval process · Prohibited uses · Incident reporting · Review schedule · Sign-off procedure
Pre-populated for financial services. Editable after purchase.
Tool
Tier
Condition
Microsoft Copilot (M365)
Approved
Corporate account only
Grammarly Business
Approved
Business plan with DPA
ChatGPT Enterprise
Limited
No client data, no PII
Claude (claude.ai)
Limited
Internal use only
Otter.ai / meeting AI
Limited
Internal meetings only, with all participants' consent
+ 14 more tools
Locked
Unlock full list
ChatGPT (personal/free)
Prohibited
Inputs used for model training
Any AI that trains on inputs
Prohibited
Data retention risk
Distribute this to all staff when adopting the policy. Full kit includes e-signature integration.
I, the undersigned, confirm that I have read and understood the Acme Corp AI Acceptable Use Policy dated March 2026. I agree to:
1. Use only approved or pre-authorised AI tools for work tasks.
2. Not input client PII, financial data, or confidential information into any AI tool unless explicitly authorised in writing.
3. Review and verify all AI-generated outputs before using them in client-facing work.
4. Report any suspected breach of this policy to my manager and IT within 24 hours.
I understand that violations of this policy may result in disciplinary action up to and including termination.
Employee signature ___________
Date ___________
Manager signature ___________
Date ___________
Full kit includes a clean Word/PDF version ready to distribute via your e-signature tool of choice.
Can employees use ChatGPT to help write emails?
Yes, but only through the enterprise-licensed version with corporate credentials. Do not paste client names, account numbers, or any confidential financial data. Personal free-tier accounts are prohibited — inputs may be used to train the model, which would violate our data obligations.
Can employees use AI for meeting notes?
Only for internal meetings, with all participants notified before recording begins. Do not use AI meeting tools for client calls unless the client has given explicit written consent and the tool has been approved by IT.
+ 10 more questions included in full policy kit
Unlock to see all 12 answers, including: image generation, code assistants, client deliverables, BYOD devices, and escalation procedures.
This template is informational only and does not constitute legal advice. Review with qualified legal counsel before company-wide adoption. Legal and regulatory references current as of April 2026.
Unlock your full policy kit
Cheaper than one hour of a lawyer's time. Delivered to your inbox.
Most popular
$149
/month · cancel anytime
✓ Full 4-document policy kit
✓ All 24 tools in tier list
✓ Editable in-browser
✓ PDF download, all docs
✓ E-signature integration
✓ Alert when AI landscape changes
✓ Monthly policy refresh — automatically
✓ Updated as AI tools change
30-day money-back guarantee
$79
one-time purchase
✓ Full 4-document policy kit
✓ All 24 tools in tier list
✓ Editable in-browser
✓ PDF download, all docs
✓ E-signature integration
✗ No monitoring or refresh
✗ No change alerts
Not legal advice. Informational policy template only. Cancel subscription anytime. By purchasing you agree to our Terms of Service.
Why shadow AI is a real risk
The numbers every HR manager needs to know
223
AI-linked data policy violations per month
The average enterprise experiences 223 data policy violations per month tied to AI usage — most of which IT teams never see.
Most companies don't discover they have a shadow AI problem until something goes wrong. A policy gives employees clear rules before that happens.
10 min
From zero to a complete policy
Answer a few questions about your company. Shadow AI Policy generates a tailored 4-document kit — not a generic template you have to rewrite from scratch.
4 docs
Everything HR needs in one place
Policy document, tool tier list, employee acknowledgment form, and manager FAQ — the complete rollout package, not just a policy to file away.
By industry
Not one-size-fits-all
A healthcare company handling PHI needs different AI rules than a SaaS startup. Your policy reflects your actual industry, tools, and data obligations.
No IT
Built for HR and Legal teams
Most alternatives are full GRC platforms or law-firm engagements that cost thousands and take weeks. Shadow AI Policy is designed for the HR manager or legal lead who needs a working policy this week.
Not legal advice — and that's stated clearly
Shadow AI Policy generates an informational template based on your inputs. We recommend reviewing it with your legal counsel before company-wide adoption. The policy is a starting point — the structure, language, and tool classifications are done for you.
✓
30-day money-back guarantee
If Shadow AI Policy doesn't generate a policy document useful to your company, email us within 30 days for a full refund. No questions asked.
Simple pricing
Two options. No hidden fees.
Generate a free preview first — no account required. Pay only if you want the complete kit.
Most popular
Monitor plan
$149/mo
cancel anytime
✓ Full 4-document policy kit
✓ 24+ tools classified in tier list
✓ Monthly policy refresh — automatically
✓ Alert when AI landscape changes
✓ PDF download · E-signature ready
✓ Updated as AI tools change
30-day money-back guarantee
One-time kit
$79 once
no subscription
✓ Full 4-document policy kit
✓ 24+ tools classified in tier list
✗ No monthly refresh
✗ No change alerts
✓ PDF download · E-signature ready
30-day money-back guarantee
Industry-specific policies
Your industry has specific AI risks
Shadow AI Policy generates different policies for different industries — because the rules for a fintech firm handling PCI data are not the same as a law firm handling client files.
What is a shadow AI policy?
A shadow AI policy is a workplace document that defines which AI tools employees may use, which require manager approval, and which are prohibited — along with clear rules for how AI-generated content may be used in client-facing and internal work. It exists to prevent unauthorised AI use (shadow AI) that can expose sensitive company data to third-party AI providers without oversight.
Does my company really need an AI acceptable use policy?
Almost certainly yes. IBM's 2025 Cost of a Data Breach report found that 63% of breached organizations had no AI governance policy in place, and that widespread shadow AI adds an average of $670,000 to the cost of a breach. Meanwhile, Netskope found that 47% of employees who use AI tools at work do so through personal accounts that bypass all corporate security controls. The risk is not theoretical.
How is Shadow AI Policy different from a free AI policy template?
Free templates are static Word documents with placeholder text and generic rules. Shadow AI Policy generates a policy tailored to your actual situation — the specific AI tools your team uses are classified, your industry's data obligations shape the rules, and your company size influences the approval process. You also get four documents, not one — including the tool tier list and employee acknowledgment form that free templates never include.
Is this legal advice?
No. Shadow AI Policy generates an informational policy template based on your inputs. It is not legal advice and does not create an attorney-client relationship. We recommend having your legal counsel review the final document before company-wide adoption. Shadow AI Policy is a starting point that covers the essential structure and language — the hard work of knowing what to include is done for you.
What happens when AI tools change?
On the Monitor plan ($149/mo), your policy kit is regenerated every month to reflect the latest AI developments relevant to your industry and emailed to you automatically. We also send targeted alerts when a significant AI tool or policy change materially affects your kit — for example, a tool you've approved changing its data training policy. Alerts are curated, not continuous, so your inbox stays useful. One-time purchase customers receive the policy current as of their purchase date and can upgrade to Monitor anytime.
✅
Payment confirmed!
Your complete AI Policy Kit is being generated right now and will be in your inbox within 2 minutes. Check your email — including your spam folder.
What you're receiving
✓ AI Acceptable Use Policy — all 8 sections, tailored to your industry
✓ AI Tool Tier List — 24+ tools classified Approved / Limited / Prohibited
✓ Employee Acknowledgment Form — ready to print or e-sign
✓ Manager FAQ — 12 questions your team will actually ask
No fake testimonials, no inflated numbers. Here's exactly what backs the product.
30-day money-back guarantee
If the kit doesn't meet your needs for any reason, email us within 30 days. Full refund, no questions asked. Read our refund policy →
No tracking. No analytics.
This site uses no Google Analytics, no Meta pixels, no Hotjar, no session replay. Only the data you voluntarily enter into the generator is collected. Verify in our privacy policy →
Enterprise-grade processors
Payments by Stripe (PCI-DSS Level 1). AI generation by Anthropic's Claude API — per their commercial terms, your inputs never train their models.
You reply to a real person
Shadow AI Policy is operated by Simcha Fuchs, a solo founder. Email info@shadowaipolicy.com and a human responds within 2 business days.
Transparent terms
Wyoming governing law, clear liability limits, 30-day window to cancel subscriptions, no lock-in. Read the full terms →
Real citations, no fabrication
Every statistic on this site links to its primary source — IBM, Gartner, Netskope, Second Talent. Click any stat to read the underlying report yourself.
Shadow AI Policy is a small, independent product. We don't have investor pressure to pad numbers or fabricate testimonials. We'd rather be honest than impressive.
Your team is already using AI tools. Make sure they're doing it safely.
Generate your AI acceptable use policy in 10 minutes. Free preview, no account required.
From $79 one-time · $149/mo with monitoring · Cancel anytime