By the Shadow AI Policy team
The week of April 23, 2026, is shaping up as one of the most consequential in recent AI governance history — with a major EU regulatory move against ChatGPT, Elon Musk's xAI filing a federal lawsuit to block a landmark U.S. state AI law, and an August deadline for the EU AI Act's core obligations now less than 100 days away.
This briefing covers four developments HR, legal, and compliance teams at small-to-midsize companies need on their radar right now: the EU's impending DSA designation of ChatGPT as a "very large" platform; xAI's constitutional challenge to Colorado's AI Act (SB 24-205), filed ahead of its June 30 effective date; the EU AI Act's looming August 2, 2026, obligations deadline; and the White House National AI Policy Framework's push to preempt state AI laws — and what it actually means (and doesn't mean) for your compliance obligations today.
The Colorado AI Act (SB 24-205) goes into effect June 30, 2026, regardless of xAI's lawsuit — your compliance obligations stand until a court says otherwise. If your company uses AI in hiring, performance management, or any other "consequential decision" affecting people in Colorado, audit those tools now and document your risk management approach. At the federal level, nothing has actually changed yet — state AI laws are still in force.
What happened: The European Commission is apparently preparing to designate ChatGPT as a "very large online search engine" under its landmark Digital Services Act (DSA). German newspaper Handelsblatt, citing sources within the Commission, reported this week that the classification could be announced within days, subjecting OpenAI to some of the EU's toughest digital regulations. Commission spokesman Thomas Regnier confirmed the review, stating: "OpenAI has published user numbers for ChatGPT above the 45 million DSA threshold for designation. The Commission services are currently assessing this information."
Why the numbers matter: Under the DSA, services with more than 45 million monthly active users in the EU can be designated as either very large online platforms (VLOPs) or very large online search engines (VLOSEs) — labels that define strict obligations aimed at limiting systemic risks to users and society. Data published by OpenAI indicates that ChatGPT's search functionality reached more than 120 million monthly users in the EU in the six months to September 2025, well above the threshold.
What designation would actually require: OpenAI already adheres to certain DSA rules, but a VLOSE designation would expand its obligations considerably, potentially requiring changes to how ChatGPT is designed and how its systems manage risk. If designated, ChatGPT would be the first AI chatbot formally subject to DSA obligations, including systemic risk assessments, transparency reporting, and independent audits. OpenAI would need to evaluate how ChatGPT affects fundamental rights, democratic processes, and mental health, updating its systems and features based on identified risks. Critically, once the Commission designates a platform or search engine under the DSA, the company has four months to meet the new requirements. Those include setting up clear contact channels for both regulators and users, reporting suspected criminal activity, providing accessible terms of service, and ensuring transparency around recommendation algorithms.
Why it matters to your company: If your employees are using ChatGPT to process client data, HR information, or confidential business content, a DSA designation changes the regulatory environment around that tool. Legal scholars say the decision could set a precedent for generative AI regulation across the EU. "Classifying ChatGPT as a VLOSE will expand scrutiny beyond what's currently covered under the AI Act," said Natali Helberger, professor of information law at the University of Amsterdam. Read the full coverage from Computing.co.uk (April 22, 2026).
What happened: Colorado's SB 24-205, signed by Governor Polis on May 17, 2024, requires developers and deployers of "high-risk" AI systems to use reasonable care to protect consumers from "algorithmic discrimination" and imposes related disclosure, documentation, and impact-assessment obligations in connection with "consequential decisions" affecting education, employment, financial services, health care, housing, insurance, and legal services. Enforcement authority is vested exclusively in the Colorado Attorney General, with civil penalties of up to $20,000 per violation. The original February 1, 2026 effective date was deferred to June 30, 2026 following Colorado's August 2025 special legislative session, and a working group convened by Governor Polis published a proposed amendment to the law on March 17, 2026 — though that amendment has not yet been introduced as legislation. This week, Elon Musk's AI company, xAI, filed a federal lawsuit in Denver seeking to block the law before it takes effect.
The legal theories: xAI alleges that SB 24-205 violates the First Amendment by compelling it, as a "developer," to alter Grok's training, fine-tuning, system prompts, and outputs to conform to Colorado's preferred positions on contested subjects, and by separately compelling content- and viewpoint-based disclosures regarding bias-mitigation practices. xAI also argues that the law violates the Dormant Commerce Clause by reaching development and deployment activity occurring entirely outside Colorado, noting that xAI is organized under Nevada law and headquartered in Palo Alto, California, with no Colorado offices.
What about the proposed rewrite? On March 17, 2026, the Colorado AI Policy Work Group, with strong support from Governor Jared Polis, proposed a new AI legal framework to replace the Colorado AI Act. The proposed framework, entitled "Concerning the Use of Automated Decision Making Technology in Consequential Decisions," shifts the obligations toward transparency, recordkeeping, and consumer rights — and away from requirements such as reporting algorithmic discrimination, implementing a risk management policy, and conducting AI impact assessments. It more closely mirrors automated decision-making technology requirements seen under data privacy laws, rather than the AI governance requirements seen under the EU AI Act. However, the clock is ticking — Colorado's legislative session adjourns on May 13.
What this means right now: A pending lawsuit does not stay a law. SB 24-205's requirements take effect on June 30, 2026. Between now and then, stakeholders should monitor Colorado Attorney General rulemaking and guidance, as well as additional legislative proposals that may either refine or overhaul the law's definitions, safe harbors, or implementation mechanics. If your business deploys AI systems that touch employment, lending, housing, or health decisions for people in Colorado, your obligations are live unless a court grants an injunction. Track the xAI case via Baker Botts (April 22, 2026). To build or update your internal AI policy now, generate a tailored AI policy kit.
What's happening: The EU AI Act entered into force on August 1, 2024, and will be fully applicable on August 2, 2026 — with prohibited AI practices and AI literacy obligations already in application since February 2, 2025, and governance rules and obligations for general-purpose AI models applicable since August 2, 2025. That means the main body of obligations is now less than 100 days away.
What applies on August 2: As currently in force, the AI Act provides that the main body of obligations — including Annex III high-risk requirements, Article 26 deployer duties, Article 49 registration, and Article 50 transparency rules such as labelling AI-generated content — applies from August 2, 2026, along with most of the Act's remaining provisions. Critically for HR teams: if your business uses AI to screen, rank, or match candidates in the EU, those tools fall into the high-risk category. That means substantial new obligations take effect on that date and require preparation now.
The "Digital Omnibus" wild card: A legislative proposal has been adopted and the European Parliament and the Council of the EU are now discussing and negotiating the Digital Omnibus on AI. Will the August 2, 2026 deadline actually be postponed? The Digital Omnibus on AI in trilogue proposes fixed dates of December 2, 2027 for Annex III standalone high-risk systems and August 2, 2028 for Annex I. The Cypriot Presidency wants final agreement before August 2, 2026. If adopted, those dates become binding. If not adopted, the original August 2, 2026 date applies.
Enforcement readiness is uneven: As of April 2026, Member States' progress in designating national market surveillance authorities varies widely — France has tasked ANSSI with some AI Act competences, Spain has designated AESIA, and Ireland has not yet designated any market surveillance authority. As of March 2026, only eight of 27 Member States had designated their single points of contact. That unevenness doesn't reduce your legal exposure — it means you may not know who's watching first. See the full EU AI Act implementation tracker at the European Commission.
What happened: On March 20, 2026, the White House released its National Policy Framework for Artificial Intelligence. This Framework contains a sweeping set of legislative recommendations intended to establish a coherent, nationally unified approach to AI governance. While the Framework does not itself create binding legal obligations, it is likely to shape federal AI legislation in the months and years ahead.
The preemption push: The Framework's most consequential section for the current regulatory landscape is its recommendation for federal preemption of state AI laws. The administration recommends that Congress preempt state AI laws that "impose undue burdens." Several states have already taken action to regulate AI development and deployment — including Colorado's AI Act, set to take effect later in 2026, and California's amendments to the California Consumer Privacy Act regulating automated decision-making technologies. The Framework's interaction with these laws will depend heavily on how Congress translates the Administration's recommendations into legislation and how broadly any preemption provision is drawn. If broad preemption language is adopted, these and similar statutes could be rendered unenforceable.
What hasn't changed yet: There is no current comprehensive federal AI statute governing the use of AI in the employment context in the U.S. The White House framework does not impose new obligations on employers, nor does it include draft legislation or an executive order directing federal agencies. Instead, it sets out legislative recommendations for Congress. Unless and until Congress enacts federal legislation with preemptive effect, state and local AI laws remain in force. This is a critical point: the Framework is a policy signal, not a compliance shield. Businesses should continue to closely monitor both state and federal legislative developments moving forward. Read the full Consumer Finance Monitor analysis from Ballard Spahr (April 8, 2026).
The state AI wave isn't waiting for Washington: State lawmakers have introduced over 600 AI bills with requirements for private entities in the 2026 legislative sessions so far. Indiana, Utah, and Washington enacted new laws regulating the use of AI by health insurers to evaluate claims, prohibiting health insurers from using AI as the sole basis for denying or modifying claims. Compliance teams in healthcare, insurance, and HR functions need to track these state-by-state developments on their own — the federal government has not created a single authoritative clearinghouse.
About Shadow AI Policy: We build AI acceptable use policy tools for HR and operations teams at 50–500 person companies. We publish guides on shadow AI, acceptable use policies, and AI governance, updated as regulations and AI tools change.
Three concrete exposures this week. First, if you have EU users and employees using ChatGPT for work, the pending DSA designation signals stricter transparency and risk obligations for that tool are likely coming within months — update your AI tool inventory to reflect this. Second, if your operations touch Colorado residents through employment, lending, or health decisions, the Colorado AI Act (SB 24-205) takes effect June 30 regardless of xAI's lawsuit — document your AI use now. Third, if you use any AI in hiring or employee evaluation anywhere in the EU, the EU AI Act's August 2, 2026, deadline for high-risk system obligations is under 100 days away.
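The inventory-and-deadline tracking described above can be kept in something as simple as a spreadsheet, but for teams that prefer code, here is a minimal sketch. All names, fields, and the two-jurisdiction deadline table are illustrative assumptions, not a standard schema — adapt them to the jurisdictions and tools that actually apply to you.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical inventory record; field names are illustrative only.
@dataclass
class AIToolRecord:
    name: str
    use_case: str                          # e.g. "resume screening"
    jurisdictions: set[str] = field(default_factory=set)  # where affected people are
    consequential_decision: bool = False   # hiring, lending, housing, health, etc.

# Illustrative deadlines drawn from the developments covered in this briefing.
DEADLINES = {
    "CO": date(2026, 6, 30),   # Colorado AI Act (SB 24-205) effective date
    "EU": date(2026, 8, 2),    # EU AI Act high-risk obligations
}

def flag_for_review(tools: list[AIToolRecord], today: date) -> list[tuple[str, str, int]]:
    """Return (tool, jurisdiction, days_until_deadline) for every tool that
    makes consequential decisions in a jurisdiction with an upcoming deadline."""
    flags = []
    for tool in tools:
        if not tool.consequential_decision:
            continue
        for juris in tool.jurisdictions & DEADLINES.keys():
            flags.append((tool.name, juris, (DEADLINES[juris] - today).days))
    return sorted(flags, key=lambda f: f[2])  # soonest deadline first

inventory = [
    AIToolRecord("ChatGPT", "drafting HR communications", {"EU"}, False),
    AIToolRecord("ResumeRanker", "resume screening", {"CO", "EU"}, True),
]

for name, juris, days in flag_for_review(inventory, date(2026, 4, 23)):
    print(f"{name}: {juris} deadline in {days} days")
```

The point of the sketch is the shape of the record, not the code: each tool gets a use case, the jurisdictions of the people it affects, and a flag for whether it touches consequential decisions — the three facts you need before you can map a tool to any of the deadlines above.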
Should your company do anything differently this week? Yes, on two fronts. If you have employees who use ChatGPT for work purposes and you have EU market exposure, flag the DSA designation development in your next compliance review — a designation would impose new obligations on how OpenAI runs the tool, which can affect the data-handling terms you rely on. If you operate in Colorado or use AI in HR decisions in any U.S. state, confirm whether your tools meet the notice, disclosure, and human-review requirements that are already live in California, Illinois, and New York — and that will be live in Colorado by June 30, 2026.
Has the White House framework preempted state AI laws? No. The National Policy Framework released on March 20, 2026, is a set of legislative recommendations to Congress — it is not a law, an executive order, or a preemption ruling. State AI laws including the Colorado AI Act, California's FEHA amendments, and Illinois' HB 3773 remain fully in force today. A Trump executive order and DOJ litigation task force exist to challenge state laws, but no court has enjoined any of these major employment AI statutes yet. The only safe compliance position right now is to treat existing state laws as binding until a specific court order or enacted federal statute says otherwise.
Tailored to your industry and the AI tools your team uses. Free preview, $79 one-time or $149/mo with monthly updates.
Generate my policy kit →