By the Shadow AI Policy team
**Financial advisors using AI tools without a written policy aren't just taking an operational risk — they're taking a regulatory one, and the SEC and FINRA have made clear they're paying attention.** This post covers the specific AI governance obligations that apply to registered investment advisers and broker-dealers: what the SEC Marketing Rule requires for AI-generated client communications, how FINRA expects firms to supervise AI use, which prohibitions apply to client account data in AI tools, and what record retention obligations attach to AI-generated materials.

The single biggest mistake financial services firms make with AI is treating it as a productivity tool rather than a supervised communication channel. Every AI-generated output that touches a client — a summary, a recommendation, a draft email — is subject to the same supervision and recordkeeping rules as anything a human advisor writes. Build your policy around that principle first.
The SEC and FINRA haven't issued a single unified "AI rule," but that doesn't mean the regulatory framework is silent. Existing rules — the Marketing Rule, the Books and Records rules, supervision requirements — all apply to AI-generated content and AI-assisted workflows. Regulators have said so explicitly in examination priorities and guidance documents. The absence of AI-specific rulemaking isn't a green light; it's a gap your firm is expected to fill with internal policy.
FINRA flagged AI as a key examination priority in its 2024 Annual Regulatory Oversight Report, noting that firms are expected to have supervisory procedures in place before deploying AI tools — not after. The SEC's Division of Examinations (formerly the Office of Compliance Inspections and Examinations, or OCIE) has similarly signaled that AI use in client-facing roles is a priority area. If examiners walk into your firm and ask for your AI policy, "we don't have one" is a compliance failure, full stop.
For context on how shadow AI creates organizational risk more broadly, see our guide on what shadow AI is and why it matters. The dynamics described there apply directly to the financial services environment, where unsanctioned tools can touch regulated data before anyone in compliance knows they exist.
The SEC's Marketing Rule — formally Investment Advisers Act Rule 206(4)-1, with a compliance date of November 4, 2022 — governs all advertisements and client communications from registered investment advisers. It applies to AI-generated content the same way it applies to a brochure a human copywriter drafted. The rule prohibits untrue statements of material fact, materially misleading implications, and unsubstantiated performance claims, regardless of how the content was produced.
This creates a concrete problem for firms using general-purpose AI tools like ChatGPT, Claude, or Microsoft Copilot to draft client-facing materials. These tools can generate confident-sounding claims about investment strategies, market conditions, or hypothetical returns that are factually wrong or misleading. Under Rule 206(4)-1, the firm — not the AI — is responsible for every word in that output before it reaches a client. Your AI policy needs to require human review and sign-off on any AI-generated content before it's sent to clients or published.
The Marketing Rule also specifically addresses testimonials and endorsements (Rule 206(4)-1(b)). If a firm uses AI to generate or amplify social proof — synthesized reviews, AI-written client success narratives — that content must comply with the same disclosure and oversight requirements as human-generated testimonials. Don't let AI output blur the line between a compliance-reviewed statement and a fabricated endorsement.
FINRA's framework for AI use flows primarily from its existing supervision rules — FINRA Rule 3110 (Supervision) and FINRA Rule 3120 (Supervisory Control System) — rather than AI-specific rulemaking. The core obligation: firms must establish and maintain a supervisory system that covers all communications and activities of associated persons, including those assisted or generated by AI tools.
In practice, that means if a registered representative uses an AI tool to draft a client email, analyze a portfolio, or generate a market commentary, that output must flow through the same review and approval process as anything the rep wrote manually. FINRA has been explicit that technology doesn't change the supervision obligation — it shifts where in the workflow the supervision needs to happen.
FINRA has also raised concerns about AI-generated communications that could constitute recommendations under Regulation Best Interest (Reg BI). If an AI tool produces personalized output that a reasonable client could interpret as investment advice, the firm may have a Reg BI obligation — including the requirement to act in the client's best interest and document the basis for the recommendation. Your policy should require employees to flag any AI output that includes anything resembling a product suggestion or portfolio action for compliance review before delivery.
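As an illustration of what that flagging step could look like in tooling, here is a deliberately crude keyword screen in Python. It's a pre-filter for routing drafts to compliance, not a substitute for human review; the pattern list and function name are assumptions for the sketch, not anything from FINRA or Reg BI.

```python
import re

# Coarse patterns that often signal recommendation-like language (illustrative only).
RECOMMENDATION_PATTERNS = [
    r"\b(buy|sell|hold|rebalance|allocate|overweight|underweight)\b",
    r"\byou should\b",
    r"\brecommend(s|ed|ing)?\b",
]

def needs_compliance_review(ai_output: str) -> bool:
    """Return True if AI output may read as a recommendation under Reg BI."""
    text = ai_output.lower()
    return any(re.search(pattern, text) for pattern in RECOMMENDATION_PATTERNS)

draft = "Given your time horizon, you should rebalance toward fixed income."
assert needs_compliance_review(draft)  # route this draft to compliance before delivery
```

A screen like this errs on the side of over-flagging, which is the right failure mode: a false positive costs a review cycle, while a false negative is a potential Reg BI violation.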
The question regulators are asking isn't "did a human or an AI write this?" It's "did your firm have a reasonable supervisory system in place, and did it catch problems before they reached clients?" Build your policy to answer that second question.
Client account data in AI tools is the highest-risk area for most firms, and the one where policy gaps are most common. Many employees don't think twice about pasting client information into an AI tool to summarize a portfolio or draft a personalized email — but doing so with a non-enterprise AI tool almost certainly violates your firm's data obligations and may violate the client's privacy rights under SEC Regulation S-P (17 CFR Part 248), which governs the safeguarding of customer financial information.
Reg S-P requires broker-dealers, investment advisers, and other covered institutions to have written policies and procedures to protect customer records and information. Uploading client account data to a consumer AI tool that uses inputs for model training — or stores them on third-party servers without a data processing agreement — is exactly the kind of unauthorized disclosure Reg S-P is designed to prevent. The same analysis applies to client names, account numbers, Social Security numbers, holdings, and transaction history.
Your AI policy should include a clear data classification table specifying which data types can and cannot be used with which AI tool tiers. Here's a practical starting point:
| Data Type | Consumer AI Tools (ChatGPT free, Claude.ai free) | Enterprise AI Tools (with DPA, no training on data) | Firm-Hosted / On-Prem AI |
|---|---|---|---|
| Client name + account number | ❌ Prohibited | ⚠️ Review required | ✅ Permitted with controls |
| Portfolio holdings / transaction history | ❌ Prohibited | ⚠️ Review required | ✅ Permitted with controls |
| Anonymized / aggregated market data | ✅ Permitted | ✅ Permitted | ✅ Permitted |
| Internal research / non-client IP | ⚠️ Review required | ✅ Permitted with controls | ✅ Permitted with controls |
| SSN / date of birth / tax ID | ❌ Prohibited | ❌ Prohibited | ⚠️ Strict controls required |
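The table translates naturally into a pre-flight check that an internal tool could run before data leaves the firm. Below is a minimal Python sketch of that idea; the tier names, data-type keys, and `check_use` function are all illustrative, not part of any regulation or vendor API.

```python
from enum import Enum

class Tier(Enum):
    CONSUMER = "consumer"        # e.g., ChatGPT free, Claude.ai free
    ENTERPRISE = "enterprise"    # DPA in place, no training on firm data
    FIRM_HOSTED = "firm_hosted"  # on-prem or firm-controlled deployment

# Policy matrix mirroring the table above.
# Verdicts: "prohibited", "review", "permitted", "strict".
POLICY = {
    "client_identifier": {Tier.CONSUMER: "prohibited", Tier.ENTERPRISE: "review",     Tier.FIRM_HOSTED: "permitted"},
    "portfolio_data":    {Tier.CONSUMER: "prohibited", Tier.ENTERPRISE: "review",     Tier.FIRM_HOSTED: "permitted"},
    "anonymized_market": {Tier.CONSUMER: "permitted",  Tier.ENTERPRISE: "permitted",  Tier.FIRM_HOSTED: "permitted"},
    "internal_research": {Tier.CONSUMER: "review",     Tier.ENTERPRISE: "permitted",  Tier.FIRM_HOSTED: "permitted"},
    "ssn_dob_tax_id":    {Tier.CONSUMER: "prohibited", Tier.ENTERPRISE: "prohibited", Tier.FIRM_HOSTED: "strict"},
}

def check_use(data_type: str, tier: Tier) -> str:
    """Return the policy verdict for a data type / tool tier pair."""
    verdict = POLICY[data_type][tier]
    if verdict == "prohibited":
        raise PermissionError(f"{data_type} is prohibited in {tier.value} tools")
    return verdict  # "review", "strict", or "permitted": route accordingly

# Example: pasting holdings into an enterprise tool triggers a review step.
print(check_use("portfolio_data", Tier.ENTERPRISE))  # -> "review"
```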
An "enterprise" AI tool isn't automatically safe just because it has a business subscription. Verify that your vendor has signed a data processing agreement (DPA), that the contract explicitly prohibits using your firm's data for model training, and that the tool is covered by the vendor's security compliance certifications (SOC 2 Type II at minimum). If you can't confirm those three things, treat the tool as consumer-tier for data classification purposes.
Under FINRA Rule 3110, firms must designate supervisors and establish written procedures for reviewing and approving employee communications. That obligation doesn't change when AI generates the content — but the workflow often does, and most firms haven't updated their Written Supervisory Procedures (WSPs) to reflect it. That's a gap examiners are looking for.
Your WSPs should address at minimum:

- Which AI tools are approved for business use, and which data types each tool tier may receive (per the classification table above)
- Who reviews and approves AI-generated client communications, and at what point in the workflow, before anything reaches a client
- How AI outputs that resemble recommendations are escalated for Reg BI review
- How AI-generated business communications are captured in the firm's records management system to satisfy retention obligations

Supervision failures are among the most common findings in FINRA examinations. Firms that haven't updated their WSPs to mention AI at all are effectively telling examiners they haven't thought about the problem. Update your WSPs before your next examination cycle. If you're building a policy from scratch, our AI acceptable use policy template guide covers the core components you'll need to adapt for your WSPs.
Both the SEC and FINRA impose strict record retention requirements that apply directly to AI-generated content. Under SEC Rule 17a-4 (for broker-dealers) and Rule 204-2 under the Investment Advisers Act (for RIAs), firms must retain business-related communications — including written communications with clients and records of recommendations — for defined periods (generally three to six years depending on the record type).
If an AI tool generates a client email, a portfolio summary, a market commentary, or anything else that constitutes a "business communication," that output is a record your firm is required to preserve and make available to regulators on request. The medium doesn't matter. The fact that the content was AI-generated doesn't exempt it.
The practical implication: if your employees are using AI tools that don't produce an auditable record of their outputs, you have a retention problem. Consumer AI tools typically don't generate logs that meet SEC or FINRA retention standards. Your policy should require that any AI-generated content used in a business context be captured in your firm's existing records management system — whether that's your email archive, your CRM, or your document management platform — before or at the time of use. Don't rely on the AI tool's chat history as your retention mechanism.
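One way to operationalize that requirement is to capture AI output into the records system the moment it's generated, before anyone uses it. Here is a rough Python sketch; `submit_to_archive` is a hypothetical stand-in for whatever API your email archive, CRM, or document platform actually exposes, and the record fields are assumptions.

```python
import hashlib
from datetime import datetime, timezone

def submit_to_archive(record: dict) -> None:
    """Hypothetical stand-in for the firm's records system API."""
    ...  # e.g., write to a WORM-compliant store via your archive vendor's SDK

def archive_ai_output(content: str, author: str, client_id: str, tool: str) -> dict:
    """Capture an AI-generated business communication as a retention record."""
    record = {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "author": author,          # the supervised person, not the AI tool
        "client_id": client_id,
        "generating_tool": tool,   # e.g., "copilot-enterprise" (illustrative)
        "content": content,
        "content_sha256": hashlib.sha256(content.encode()).hexdigest(),  # tamper evidence
        "supervisor_approved": False,  # flipped only after human review and sign-off
    }
    submit_to_archive(record)
    return record
```

Capturing at generation time, rather than at send time, means the record exists even if the draft is later edited or abandoned, which is the conservative reading of the retention rules.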
If you want a comprehensive starting point for building out the policy infrastructure behind these requirements, you can generate a tailored policy kit that covers data classification, supervision, and record retention in a format you can adapt for your firm's WSPs and compliance manual.
About Shadow AI Policy: We build AI acceptable use policy tools for HR and operations teams at 50–500 person companies. We publish guides on shadow AI, acceptable use policies, and AI governance, updated as regulations and AI tools change.
**Does the Marketing Rule still apply if a human edits AI-generated content before it goes to a client?** Yes: the rule applies to the final output, not to who or what produced the first draft. If a human edits AI-generated content and then sends it to a client, the firm is still responsible for ensuring that final output complies with Rule 206(4)-1. The review and editing process is exactly the supervisory control the rule expects — but the obligation doesn't disappear just because a human touched it last.
**Do FINRA's communication rules apply to AI-drafted content?** FINRA Rule 2210 defines communications broadly — correspondence (sent to 25 or fewer retail investors within a 30-calendar-day period), retail communications (sent to more than 25 retail investors within that period), and institutional communications all count. If an AI tool drafts or substantially contributes to any of these, the same review and approval requirements apply as for manually drafted content. There's no AI exemption in the rule text, and FINRA hasn't signaled any intent to create one.
**Can your firm use enterprise AI tools with client account data?** Possibly, but not automatically. You need to verify three things before using any enterprise AI tool with client data: (1) the vendor has signed a data processing agreement with your firm, (2) the contract explicitly prohibits using your firm's inputs for model training, and (3) the tool meets your firm's security standards (SOC 2 Type II at minimum). OpenAI's Enterprise terms and Microsoft's data processing addendum both address these points, but you need to confirm the specific terms in your agreement — defaults in consumer or small business tiers often don't include the same protections.
**How long must AI-generated records be retained?** For broker-dealers, SEC Rule 17a-4 generally requires retention of customer-related communications for three years, with the first two years in an easily accessible location. For RIAs, Rule 204-2 under the Investment Advisers Act generally requires five years for most records related to client recommendations and communications. State requirements may add to these minimums. The key point: AI-generated content that constitutes a business communication is subject to these requirements just like anything a human wrote, and you can't rely on an AI tool's built-in chat history to satisfy them.
A policy kit tailored to your industry and the AI tools your team uses. Free preview, $79 one-time or $149/month with monthly updates.
Generate my policy kit →