Financial services firms handle some of the most sensitive data categories in any industry — client account information, non-public financial data, regulated investment advice, and personally identifiable information subject to multiple overlapping privacy frameworks. When employees use AI tools without governance, each of those categories becomes an exposure. Here's what a financial services AI policy needs to cover.
Financial services regulators are not waiting for federal AI legislation before scrutinizing how firms govern AI use. The SEC, FINRA, state banking regulators, and insurance commissioners are all actively developing expectations, and exam teams are beginning to ask questions about AI governance during routine examinations.
This isn't hypothetical future risk. It's the current regulatory environment. And it extends well beyond the largest firms. RIAs, broker-dealers, credit unions, community banks, insurance agencies, and financial planning practices at every scale face these questions.
The SEC has published guidance and brought enforcement actions related to AI use in investment advisory contexts — particularly around "AI washing" (claiming AI capabilities that don't exist) and around suitability and fiduciary considerations when AI tools influence investment recommendations. Exam priorities for 2026 explicitly include AI governance and controls.
FINRA has published guidance on the use of AI in broker-dealer operations, including expectations for supervision of AI tools used by registered representatives. The guidance emphasizes that firms are responsible for AI-generated content communicated to customers, and that supervisory obligations apply to AI-assisted communications just as they do to any other customer communication.
Multiple states — including New York, California, and Colorado — have enacted or proposed AI-specific rules affecting financial services firms operating in those states. Texas enacted AI-related employer obligations effective January 2026. The patchwork is growing and requires monitoring.
GLBA's Safeguards Rule requires financial institutions to protect customer financial information. The FTC updated the Safeguards Rule to explicitly address vendor and service provider risk — which applies to AI vendors. Sharing non-public customer financial information with AI tools that lack appropriate data processing agreements may create Safeguards Rule exposure.
Financial services firms handle data categories that require specific treatment in an AI policy — not just the general "confidential information" language that works for most industries:
Material non-public information must never enter any AI tool regardless of tier designation. This needs to be a named, standalone rule — not implied by a general "confidential information" prohibition. The consequences of MNPI exposure through an AI channel are severe enough to warrant explicit, emphatic treatment. This rule applies to all employees, not just those in investment advisory roles.
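For firms that route AI access through a gateway or browser extension, this rule can be enforced mechanically as well as on paper. Below is a minimal sketch of a pre-submission screen, assuming hypothetical MNPI markers (an internal deal-code pattern and a per-user restricted list); a real implementation would key off whatever classification labels and DLP signals your stack actually exposes.

```python
import re

# Hypothetical markers; real firms would use their own classification
# labels, restricted lists, or DLP verdicts instead of regex patterns.
RESTRICTED_PATTERNS = [
    re.compile(r"\bPROJECT[- ][A-Z]{3,}\b"),   # internal deal code names
    re.compile(r"\bMNPI\b", re.IGNORECASE),     # explicit classification tag
]

def screen_prompt(prompt: str, user_restricted_list: set[str]) -> None:
    """Raise before the prompt ever reaches an AI endpoint.

    Blocking is deliberately tier-independent: MNPI is barred from
    every tool, approved or not.
    """
    for pattern in RESTRICTED_PATTERNS:
        if pattern.search(prompt):
            raise PermissionError("Blocked: prompt matches an MNPI marker.")
    for ticker in user_restricted_list:
        if re.search(rf"\b{re.escape(ticker)}\b", prompt):
            raise PermissionError(f"Blocked: {ticker} is on your restricted list.")
```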
If AI tools are used to draft, assist with, or generate communications to customers, including email, letters, or social media content, those communications are subject to the same supervision and recordkeeping requirements as any other customer communication. The policy should require that AI-assisted customer communications be reviewed and approved by a person with appropriate supervisory authority before they are sent. This isn't a new obligation; it's an existing one that needs to be applied explicitly to the AI context.
AI tools should not generate investment recommendations, suitability assessments, or portfolio guidance without explicit human review and approval by a licensed professional. Even when AI is used as a research or drafting aid — not as the decision-maker — the output should be treated as a draft requiring professional review before it influences client advice in any form.
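The last two requirements reduce to the same control: AI output is a draft until someone with the right authority signs off, and that sign-off is recorded rather than assumed. A minimal sketch of what the recorded gate might look like, with hypothetical names throughout:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIDraft:
    """An AI-assisted customer communication or advice draft.

    Illustrative record type; the point is that approval is a
    recorded event, not an implied state.
    """
    content: str
    author: str                      # employee who used the AI tool
    approved_by: str | None = None   # supervisor or licensed professional
    approved_at: datetime | None = None

    def approve(self, reviewer: str) -> None:
        if reviewer == self.author:
            raise ValueError("Reviewer must be independent of the author.")
        self.approved_by = reviewer
        self.approved_at = datetime.now(timezone.utc)

    def release(self) -> str:
        """Only approved drafts can be sent or relied on for advice."""
        if self.approved_by is None:
            raise PermissionError("Draft not approved; cannot release.")
        return self.content
```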
GLBA's Safeguards Rule requires financial institutions to oversee service providers who access customer information. AI vendors that receive NPI are service providers under this framework — which means your firm needs to conduct due diligence on their security practices, document that due diligence, and have contractual protections in place. This requirement applies whether the AI tool is a standalone product or an AI feature embedded in existing software your firm already uses.
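What "document that due diligence" means in practice is a dated, producible record per vendor. A sketch of one such record, with illustrative field names only:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class AIVendorDiligence:
    """One row of documented Safeguards Rule due diligence.

    Field names are illustrative; the substance is that each check
    is dated and producible on exam.
    """
    vendor: str
    receives_npi: bool
    dpa_signed: date | None          # data processing agreement in place
    security_review: date | None     # SOC 2 reviewed, questionnaire, etc.
    embedded_in: str | None = None   # parent product, if an embedded AI feature

    def exam_ready(self) -> bool:
        # A vendor that touches NPI needs both a DPA and a security review.
        if not self.receives_npi:
            return True
        return self.dpa_signed is not None and self.security_review is not None
```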
Many regulated financial activities have specific recordkeeping requirements. If AI tools are used in those activities — generating analysis that informs investment decisions, producing communications to customers, or creating documentation of advisory processes — the records of that AI use may need to be retained and producible in the same way as other business records. Your policy should address whether and how AI-assisted work in regulated activities will be documented and retained.
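If the answer is that it will be retained, retention can piggyback on the books-and-records archive the firm already maintains. A sketch, assuming a hypothetical append-only `store` interface and illustrative field names:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_interaction(store, user: str, tool: str, activity: str,
                       prompt: str, output: str) -> dict:
    """Append one retention record per AI interaction in a regulated activity.

    `store` stands in for whatever WORM-style archive the firm already
    uses; `activity` maps the record to its retention schedule.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "activity": activity,          # e.g. "advisory_research"
        "prompt": prompt,              # retained verbatim, producible on exam
        "output": output,
        "integrity": hashlib.sha256((prompt + output).encode()).hexdigest(),
    }
    store.append(json.dumps(record))   # append-only, never updated in place
    return record
```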
Regulatory examiners asking about AI governance at financial services firms are converging on a core set of questions: Does the firm have a written AI use policy? Have employees acknowledged it? Which AI tools are approved, and how is customer NPI kept out of the rest? How are AI-assisted customer communications supervised? How is AI vendor due diligence documented? Being prepared to answer them, with documentation, is the practical goal of your AI governance program.
A written policy with documented employee acknowledgment answers the first two directly. The others require process and operational controls — but they all start with the policy.
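The "documented employee acknowledgment" part is worth making concrete: an acknowledgment is a dated record tied to a specific policy version, not a one-time checkbox. A minimal sketch, with hypothetical names:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class PolicyAcknowledgment:
    """One producible record per employee per policy version."""
    employee: str
    policy_version: str      # e.g. "ai-aup-2026.1"
    acknowledged_at: datetime

def record_acknowledgment(ledger: list, employee: str,
                          version: str) -> PolicyAcknowledgment:
    ack = PolicyAcknowledgment(employee, version, datetime.now(timezone.utc))
    ledger.append(ack)       # re-acknowledge on every new policy version
    return ack
```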
"Regulatory exam priorities for 2026 explicitly include AI governance and controls for registered investment advisers and broker-dealers." — SEC Examination Priorities, 2026
Shadow AI Policy generates a tailored AI acceptable use policy with financial services-specific data handling rules — including NPI, MNPI, and customer communication guidance — plus a tool tier list, acknowledgment form, and manager FAQ.
Generate my financial services policy →