Your AI acceptable use policy, ready in 10 minutes
75% of CISOs have already found unsanctioned AI tools running in their environment.
Shadow AI Policy generates a custom shadow AI policy, employee acknowledgment form,
approved tool list, and Manager FAQ — tailored to your industry and size.
Free preview · Full kit from $79 · 30-day money-back guarantee · Questions? info@shadowaipolicy.com
75% — of CISOs found unsanctioned AI tools in their environment (2026 CISO AI Risk Report)
38% — of employees share sensitive data with AI without permission (CybSafe / NCA, 2024)
63% — of companies have zero AI governance policy in place (IBM, 2025)
$670k — average extra cost per breach when shadow AI is involved (IBM Cost of a Data Breach, 2025)
What you get
Four documents. One kit. Done.
📋
AI Acceptable Use Policy
Full policy document covering scope, data rules, approved uses, prohibited uses, incident reporting, and review schedule. Ready to add to your employee handbook.
🛡️
Shadow AI Tool Tier List
Every major AI tool classified as Approved, Limited Use, or Prohibited — pre-populated for your industry. Answers the question every HR manager faces: "Can we use ChatGPT?"
✍️
Employee Acknowledgment Form
One-page sign-off form confirming employees have read and understood the policy. Print it, or distribute it via your existing e-signature tool (DocuSign, HelloSign, Adobe Sign).
❓
Manager's FAQ
Answers the 12 questions every team lead gets when a new AI policy drops. Hand it to every manager and stop answering the same questions all week.
How it works
Three steps. No IT team required.
1
Answer a few questions about your company
Industry, company size, which AI tools your team already uses, and what data you handle. Takes about 3 minutes.
2
Shadow AI Policy generates your tailored kit
Our AI analyses your profile against current shadow AI risks and generates documents specific to your industry, tools, and data obligations — not a generic template.
3
Download, customise, and distribute
Download all four documents as PDFs. Edit in-browser. Send the acknowledgment form to your team. Monitor plan subscribers get an alert when the AI landscape changes and their policy needs an update.
Free preview
Generate your AI policy kit
Step 1 of 3 — Your company
Tell us about your company
We'll tailor the policy to your industry, size, and risk profile.
Financial services
Healthcare
Legal / professional services
Technology / SaaS
Retail / e-commerce
Education
Manufacturing
Other
1–24
25–199
200–999
1,000+
HR manager
Legal / compliance
IT / security
CEO / founder
Operations
AI tools & data
We classify each tool as approved, limited, or prohibited based on your data profile.
ChatGPT
Microsoft Copilot
Claude
Gemini
GitHub Copilot
Grammarly
Otter / meeting AI
Notion AI
Midjourney / image gen
Perplexity
Not sure / need to audit
Customer PII
Confidential financial data
Protected health info (PHI)
Client legal files
Source code / IP
Payment card data (PCI)
Please enter a valid work email address to receive your policy kit.
Building your policy kit
Analysing company profile and industry risks
Classifying AI tools — approved, limited, prohibited
Drafting AI Acceptable Use Policy
Generating employee tool list
Writing employee acknowledgment form
Generating Manager FAQ
Finalising 4-document kit
Full kit ready — 4 documents tailored to financial services
This Artificial Intelligence Acceptable Use Policy ("Policy") establishes guidelines for the responsible use of AI tools by all employees, contractors, and third parties acting on behalf of Acme Corp ("the Company").
The rapid adoption of AI tools introduces both significant productivity opportunities and serious data security risks. Employees using AI tools to process confidential financial data, client PII, or internal communications can inadvertently expose that information to third-party AI providers whose data retention and training policies may conflict with the Company's obligations to clients and regulators.
Section 2 — Who this policy applies to
This Policy applies to all full-time and part-time employees, contractors and freelancers with access to Company systems or data, and any third party using AI tools while performing work for the Company. Compliance is mandatory. Violations may result in disciplinary action up to and including termination of employment.
Sections 3–8 included in full policy: Data classification rules · Tool approval process · Prohibited uses · Incident reporting · Review schedule · Sign-off procedure
Pre-populated for financial services. Editable after purchase.
Tool
Tier
Condition
Microsoft Copilot (M365)
Approved
Corporate account only
Grammarly Business
Approved
Business plan with DPA
ChatGPT Enterprise
Limited
No client data, no PII
Claude (claude.ai)
Limited
Internal use only
Otter.ai / meeting AI
Limited
Internal meetings only, with all participants' consent
+ 14 more tools
Locked
Unlock full list
ChatGPT (personal/free)
Prohibited
Inputs used for model training
Any AI that trains on inputs
Prohibited
Data retention risk
Distribute this to all staff when adopting the policy. Full kit includes e-signature integration.
I, the undersigned, confirm that I have read and understood the Acme Corp AI Acceptable Use Policy dated March 2026. I agree to:
1. Use only approved or pre-authorised AI tools for work tasks.
2. Not input client PII, financial data, or confidential information into any AI tool unless explicitly authorised in writing.
3. Review and verify all AI-generated outputs before using them in client-facing work.
4. Report any suspected breach of this policy to my manager and IT within 24 hours.
I understand that violations of this policy may result in disciplinary action up to and including termination.
Employee signature ___________
Date ___________
Manager signature ___________
Date ___________
Full kit includes a clean Word/PDF version ready to distribute via your e-signature tool of choice.
Can employees use ChatGPT to help write emails?
Only using the enterprise-licensed version with corporate credentials. Do not paste client names, account numbers, or any confidential financial data. Personal free-tier accounts are prohibited — inputs may be used to train the model, which would violate our data obligations.
Can employees use AI for meeting notes?
Only for internal meetings, with all participants notified before recording begins. Do not use AI meeting tools for client calls unless the client has given explicit written consent and the tool has been approved by IT.
+ 10 more questions included in full policy kit
Unlock to see all 12 answers, including: image generation, code assistants, client deliverables, BYOD devices, and escalation procedures.
This template is informational only and does not constitute legal advice. Review with qualified legal counsel before company-wide adoption. Legal references current as of March 2026.
Unlock your full policy kit
Cheaper than one hour of a lawyer's time. Delivered to your inbox.
Most popular
$149
/month · cancel anytime
✓ Full 4-document policy kit
✓ All 24 tools in tier list
✓ Editable in-browser
✓ PDF download, all docs
✓ E-signature integration
✓ Alert when AI landscape changes
✓ Quarterly policy refresh
✓ Regenerate any time
30-day money-back guarantee
$79
one-time purchase
✓ Full 4-document policy kit
✓ All 24 tools in tier list
✓ Editable in-browser
✓ PDF download, all docs
✓ E-signature integration
✗ No monitoring or refresh
✗ No change alerts
Not legal advice. Informational policy template only. Cancel subscription anytime. By purchasing you agree to our Terms of Service.
Why shadow AI is a real risk
The numbers every HR manager needs to know
223
The average enterprise experiences 223 data policy violations per month tied to AI usage — most of which IT teams never see.
Netskope Cloud and Threat Report, 2026
47%
Of employees who use AI tools at work do so through personal, unmanaged accounts — completely invisible to IT and security teams.
Netskope, 2026
67%
Companies that implement a clear AI policy with approved alternatives see 67% less shadow AI usage — because employees know the rules.
Second Talent, 2026
Why teams use Shadow AI Policy
The problem is already inside your company
Most companies don't discover they have a shadow AI problem until something goes wrong. A policy gives employees clear rules before that happens.
10 min
From zero to a complete policy
Answer a few questions about your company. Shadow AI Policy generates a tailored 4-document kit — not a generic template you have to rewrite from scratch.
4 docs
Everything HR needs in one place
Policy document, tool tier list, employee acknowledgment form, and manager FAQ — the complete rollout package, not just a policy to file away.
By industry
Not one-size-fits-all
A healthcare company handling PHI needs different AI rules than a SaaS startup. Your policy reflects your actual industry, tools, and data obligations.
No IT
Built for HR and Legal teams
Enterprise AI governance platforms cost $20,000+ and require a security team to implement. Shadow AI Policy is designed for the HR manager or legal lead who needs this done today.
Not legal advice — and that's stated clearly
Shadow AI Policy generates an informational template based on your inputs. We recommend reviewing it with your legal counsel before company-wide adoption. The policy is a starting point — the structure, language, and tool classifications are done for you.
✓
30-day money-back guarantee
If Shadow AI Policy doesn't generate a policy document useful to your company, email us within 30 days for a full refund. No questions asked.
Industry-specific policies
Your industry has specific AI risks
Shadow AI Policy generates different policies for different industries — because the rules for a fintech firm handling PCI data are not the same as a law firm handling client files.
What is a shadow AI policy?
A shadow AI policy is a workplace document that defines which AI tools employees may use, which require manager approval, and which are prohibited — along with clear rules for how AI-generated content may be used in client-facing and internal work. It exists to prevent unauthorised AI use (shadow AI) that can expose sensitive company data to third-party AI providers without oversight.
Does my company really need an AI acceptable use policy?
Almost certainly yes. Research from IBM in 2025 found that 63% of companies have no AI governance policy in place. Meanwhile, Netskope found that 47% of employees who use AI tools at work do so through personal accounts that bypass all corporate security controls. The risk is not theoretical — IBM's Cost of a Data Breach report found that shadow AI involvement adds an average of $670,000 to breach costs.
How is Shadow AI Policy different from a free AI policy template?
Free templates are static Word documents with placeholder text and generic rules. Shadow AI Policy generates a policy tailored to your actual situation — the specific AI tools your team uses are classified, your industry's data obligations shape the rules, and your company size influences the approval process. You also get four documents, not one — including the tool tier list and employee acknowledgment form that free templates never include.
Is this legal advice?
No. Shadow AI Policy generates an informational policy template based on your inputs. It is not legal advice and does not create an attorney-client relationship. We recommend having your legal counsel review the final document before company-wide adoption. Shadow AI Policy is a starting point that covers the essential structure and language — the hard work of knowing what to include is done for you.
What happens when AI tools change?
On the Monitor plan ($149/mo), Shadow AI Policy sends you an email alert when a significant AI tool or policy change affects your kit — for example, if a tool you've approved changes its data training policy, or a major new AI tool becomes widely used. You can refresh your policy kit with one click. One-time purchase customers receive the policy current as of their purchase date.
Your team is already using AI tools. Make sure they're doing it safely.
Generate your AI acceptable use policy in 10 minutes. Free preview, no account required.
From $79 one-time · $149/mo with monitoring · Cancel anytime