By the Shadow AI Policy team
The week of May 5, 2026 was one of the most consequential in AI governance in recent memory. In Washington, the Trump administration signaled a striking about-face on AI oversight. In Brussels, a high-stakes negotiation over the EU AI Act's reform package collapsed — and then was rescheduled. In Denver, a federal court froze the nation's most ambitious state AI employment law. And fresh research confirmed what HR teams already sense: shadow AI is now endemic, with up to one-third of workers using unapproved tools outside IT's line of sight.
This week's briefing covers four stories: (1) the White House weighing a formal pre-release vetting regime for frontier AI models, prompted by Anthropic's Mythos model; (2) the DOJ intervening in a federal lawsuit to block Colorado's AI anti-discrimination law, SB 24-205, followed by a court stay; (3) the EU AI Act "Omnibus" reform trilogue collapsing on April 28, throwing the August 2, 2026 high-risk deadline into limbo; and (4) new Lenovo survey data showing that up to one-third of employees are using AI entirely outside IT governance structures.
The Colorado AI Act stay is the most immediately actionable story for U.S. HR and compliance teams: the law is paused, but every underlying obligation — bias risk assessments, AI disclosure to employees, impact documentation — still maps directly to federal anti-discrimination law under Title VII and the ADA, which no court has stayed. Don't use the Colorado pause as a reason to slow down your AI governance work; use it as a window to build the documentation that would defend you under any version of the law.
On May 5–6, 2026, the Trump administration publicly confirmed it is studying a potential executive order that would establish a formal vetting process for new AI models before public release. National Economic Council Director Kevin Hassett described the idea to Fox Business as creating "a clear roadmap to everybody about how this is going to go and how future AIs that also potentially create vulnerabilities should go through a process so that they're released to the wild after they've been proven safe, just like an FDA drug." The Bloomberg report from May 6 and CSO Online's detailed analysis both confirm the discussions are active, though no order has been signed.
The immediate trigger is Anthropic's Mythos model, which the company has described as representing a watershed moment for cybersecurity. According to Anthropic, Mythos Preview has found thousands of high-severity vulnerabilities, including some in every major operating system and web browser, and the company has limited the model's release to a handful of partner companies. Separately, the Center for AI Standards and Innovation (CAISI) announced agreements on May 5 with Google DeepMind, Microsoft, and xAI to "conduct pre-deployment evaluations and targeted research" of frontier AI models, expanding a program that already covered Anthropic and OpenAI.
This matters for HR and compliance teams even though no executive order is final yet. The proposed order would create an "AI working group" of tech executives and government officials to develop oversight procedures. If the reports are accurate, such an order would mark a sharp departure from the administration's deregulatory posture to date: immediately upon taking office, the Trump administration revoked a Biden-era executive order addressing AI risks. Hassett said it's "really quite likely" that any testing spelled out under the order would ultimately extend to all AI companies. That means the enterprise tools your teams use today, including those from Anthropic, Google, and OpenAI, could face government security review before their next major updates ship. Track vendor communications from these providers closely over the coming weeks.
This is the most HR-specific story of the week. On April 27, 2026, a federal court paused enforcement of Colorado's Artificial Intelligence Act (SB 24-205), placing one of the country's most comprehensive state AI laws on hold while lawmakers reconsider its timing and scope. The order prevents the state from initiating enforcement actions during the pendency of the litigation, effectively freezing the law just weeks before its anticipated June 30, 2026 effective date. Read the DOJ's official press release and the HR Dive summary for the procedural detail.
How did we get here? On April 24, 2026, the Department of Justice intervened in a lawsuit filed by xAI, a company owned by Elon Musk, challenging the Colorado Artificial Intelligence Act. The DOJ filed a Complaint in Intervention pursuant to the Civil Rights Act of 1964, 42 U.S.C. § 2000h-2, after the Acting Attorney General certified that the case is "of general public importance." The DOJ's core argument: the Colorado law violates the Equal Protection Clause of the Fourteenth Amendment by requiring AI companies to prevent unintentional disparate impact that their products could have based on protected characteristics like race and sex, and by exempting liability for certain forms of discrimination designed to advance "diversity."
Here is the critical nuance for HR leaders: this development is neither a repeal nor a permanent delay. It leaves employers in a familiar position, navigating legal uncertainty while continuing to operate against a rapidly evolving regulatory backdrop. If the DOJ's equal protection theory succeeds, it could have significant implications for AI regulation beyond Colorado, potentially limiting the ability of states and even federal agencies to require demographic balancing in AI systems. But even if the Colorado law is ultimately blocked or significantly revised, employers should not treat the pause as a signal to deprioritize AI governance. Plaintiffs have already brought claims challenging AI tools under existing legal frameworks, including federal and state anti-discrimination laws and statutes such as the Fair Credit Reporting Act, and regulators have made clear that long-standing civil rights laws, including Title VII, the ADA, and analogous state statutes, apply fully to AI-driven employment decisions. If you use AI in hiring, performance management, or compensation decisions, now is the time to document what those tools do and how you have reviewed them for bias, regardless of what happens to SB 24-205.
To understand where your current AI tools sit relative to these obligations, generate a tailored AI policy kit that maps your use cases to the relevant legal requirements.
European compliance teams had been watching trilogue negotiations on the EU AI Act "Digital Omnibus" reform package, a European Commission proposal that would have delayed the August 2, 2026 high-risk AI enforcement deadline to December 2, 2027. Those talks fell apart after 12 hours in Brussels on April 28. The IAPP published the clearest summary of what this means in practice.
The breakdown centered on a specific jurisdictional fight: how the AI Act should interact with existing digital rules. As the enforcement deadline nears, questions are mounting over whether AI systems embedded in products already governed by EU sectoral safety legislation, such as medical devices, industrial machinery, toys and connected cars, should be exempted from the AI Act's additional requirements or governed solely by sectoral rules.
What does this mean for your compliance calendar right now? The AI Act is already law, and enforcement for high-risk systems still begins August 2, 2026. Just as important, many parts of the act were never on the Omnibus negotiating table, and those obligations need attention now. Article 50 is the clearest example: the act's transparency requirements for generative AI systems, including user-facing disclosure and machine-readable marking of AI-generated content, still apply from August 2 (see the sketch below). Deliberations are not over: negotiators see a potentially clear path toward a simplified agreement at a follow-up trilogue expected in mid-May 2026. But as of today, every company deploying high-risk AI systems in the EU, including HR tools used in recruitment, performance evaluation, or workforce management, must treat August 2, 2026 as a live deadline.
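Article 50 does not prescribe a specific wire format for machine-readable marking, and harmonized technical standards are still taking shape (C2PA-style content credentials are one candidate for media). As a purely illustrative sketch, here is one way a deployer could attach a provenance record to generated text; every field name below is our own assumption, not a requirement of the act:

```python
import hashlib
import json
from datetime import datetime, timezone

def mark_ai_generated(text: str, model_id: str) -> str:
    """Wrap generated text in a provenance envelope.

    Illustrative only: Article 50 requires machine-readable marking of
    AI-generated content but does not mandate this (or any) schema.
    """
    record = {
        "content": text,
        "provenance": {
            "ai_generated": True,   # the disclosure itself
            "generator": model_id,  # which system produced the content
            "generated_at": datetime.now(timezone.utc).isoformat(),
            # A hash lets downstream systems detect tampering with the content.
            "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        },
    }
    return json.dumps(record, indent=2)

print(mark_ai_generated("Draft summary of Q2 hiring funnel data.", "example-model-v1"))
```

Whatever format you land on, the marking should travel with the content through exports and downstream systems; that is the practical test your vendors' Article 50 story needs to pass.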
The stakes are real. Fines scale with the severity of non-compliance: violations of prohibited AI practices can reach up to €35 million or 7% of global annual turnover, and breaches of high-risk AI requirements up to €15 million or 3%, whichever amount is higher in each case. The table below summarizes the current EU AI Act enforcement timeline as it stands after the Omnibus breakdown:
| Deadline | What Applies | Status After Omnibus Collapse |
|---|---|---|
| Feb 2, 2025 (passed) | Prohibited AI practices banned; AI literacy obligations begin | In force — no change |
| Aug 2, 2025 (passed) | GPAI model obligations apply; governance rules in effect | In force — no change |
| Aug 2, 2026 | High-risk AI systems (standalone); transparency rules (Article 50); GPAI enforcement powers activate | Still live — Omnibus delay proposal unresolved |
| Aug 2, 2027 | Remaining provisions; high-risk AI embedded in regulated products | Omnibus proposed extending this; outcome uncertain |
| Dec 2, 2027 / Aug 2, 2028 | Omnibus-proposed long-stop dates for standalone and product-embedded high-risk systems | Proposed but not adopted — still under negotiation |
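To make that exposure concrete: under the act's general penalty rule, the ceiling is the fixed amount or the turnover percentage, whichever is higher (separate, more lenient rules apply to SMEs). A quick back-of-the-envelope sketch with a hypothetical €2B-turnover company:

```python
def max_fine_eur(global_turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """General EU AI Act rule: the higher of the fixed cap or % of global turnover."""
    return max(fixed_cap_eur, global_turnover_eur * pct)

turnover = 2_000_000_000  # hypothetical €2B global annual turnover

# Prohibited AI practices: up to €35M or 7% of turnover
print(f"Prohibited practices ceiling: €{max_fine_eur(turnover, 35_000_000, 0.07):,.0f}")  # €140,000,000

# High-risk requirement breaches: up to €15M or 3% of turnover
print(f"High-risk breach ceiling:     €{max_fine_eur(turnover, 15_000_000, 0.03):,.0f}")  # €60,000,000
```

For any company of meaningful size, the turnover percentage, not the fixed cap, is the binding number.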
A major new survey provides the clearest recent picture of shadow AI's scale. More than 70% of employees are using artificial intelligence tools every week, and up to one-third are doing so without IT oversight. The findings come from Lenovo's Work Reborn Research Series 2026, which surveyed 6,000 full-time employees at organizations with at least 1,000 workers across the US, Canada, the UK, France, Germany, India, Japan, Singapore, Brazil, Mexico, Australia and New Zealand in December 2025 and January 2026.
The training gap is equally stark. About 31% of AI users said their employer does not provide training on how to use AI at work, while 22% said their employer does not provide AI tools for workplace use. Between one-fifth and one-third of workers are using AI outside the oversight and governance of IT teams. The report said this has created a two-tier workforce in which some employees have access to approved tools and oversight while others rely on public platforms or unauthorized systems to maintain productivity. That divide can delay return on investment, create duplicate spending on overlapping tools, increase security risks, and make it harder for organizations to determine which AI initiatives should be expanded companywide.
The legal exposure from this gap is not theoretical. A Foley & Lardner analysis published in April 2026 cited a National Cybersecurity Alliance survey in which 43% of AI users admitted to sharing sensitive company information with AI tools without their employer's knowledge. That sharing is happening today, often outside company systems and platforms, and therefore out of view of legal and compliance teams. Meeting transcription tools carry particular extra risk: state recording-consent laws, privilege and confidentiality obligations, governance controls, and record retention practices can all produce serious downstream consequences if not addressed through clear policy and oversight. The Help Net Security coverage from May 1 and the HR Director summary both have the full Lenovo data.
The data point with the most immediate policy implications: 74% of employees say more or better cybersecurity training on AI-related risks would reassure them that they and their organization are protected; 73% say it would be reassuring to know their company's cybersecurity team is using AI to address risks; and 70% say stricter policies on how employees can use AI would provide reassurance. Employees are not resisting governance — they are asking for it.
About Shadow AI Policy: We build AI acceptable use policy tools for HR and operations teams at 50–500 person companies. We publish guides on shadow AI, acceptable use policies, and AI governance, updated as regulations and AI tools change.
Three things are happening simultaneously that affect any company using AI in the workplace. First, if you have employees in Colorado or use AI in employment decisions affecting Colorado residents, your June 30 compliance deadline is on hold — but the underlying legal obligations under Title VII and the ADA are not paused. Second, if you have operations or customers in the EU, the August 2, 2026 EU AI Act high-risk deadline is still live regardless of what happens in Brussels. Third, the Lenovo data is a direct signal to audit what AI tools your employees are actually using today, not just what you have approved — because up to one-third of your workforce may already be operating outside your governance perimeter.
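One practical starting point for that audit is telemetry you already collect. As a minimal sketch, assuming you can export proxy or DNS logs to CSV with a `domain` column (the schema and the tool list below are illustrative, not exhaustive):

```python
import csv
from collections import Counter

# Illustrative starter list; extend with the AI tools relevant to your environment.
KNOWN_AI_DOMAINS = {
    "chatgpt.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "perplexity.ai": "Perplexity",
}

def audit_ai_traffic(log_path: str) -> Counter:
    """Tally requests to known AI domains in a proxy/DNS log CSV export.

    Assumes a 'domain' column in the export; adjust to your log schema.
    """
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower().removeprefix("www.")
            if domain in KNOWN_AI_DOMAINS:
                hits[KNOWN_AI_DOMAINS[domain]] += 1
    return hits

for tool, count in audit_ai_traffic("proxy_log.csv").most_common():
    print(f"{tool}: {count} requests")
```

Network logs will miss personal devices and home networks, so pair this with an anonymous employee survey before drawing conclusions about the true size of your shadow AI footprint.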
Your AI acceptable use policy likely needs updating on two fronts. First, if it does not yet address AI tools used in hiring, performance management, or workforce decisions, add that section now; the Colorado stay does not remove federal anti-discrimination exposure. Second, the policy needs to state explicitly which tools are approved for which data types, because the Lenovo and NCA survey data confirm that employees are already making those decisions themselves when the policy is silent. If you haven't defined what "approved use" means in concrete tool-and-data terms, your policy isn't doing the work you think it is.
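Concretely, "approved use" can be written down as a tool-by-data-classification matrix that both the policy text and any enforcement tooling reference. A minimal sketch, with hypothetical tool names and data classes:

```python
# Hypothetical approved-use matrix: which data classes each tool may receive.
APPROVED_USE = {
    "enterprise-llm":   {"public", "internal"},           # vendor DPA in place
    "meeting-notes-ai": {"public", "internal"},           # recording consent required
    "code-assistant":   {"public", "internal", "source"}, # no customer data
}

def is_permitted(tool: str, data_class: str) -> bool:
    """Permit only if the tool is approved AND cleared for this data class."""
    return data_class in APPROVED_USE.get(tool, set())

assert is_permitted("enterprise-llm", "internal")
assert not is_permitted("enterprise-llm", "customer-pii")  # tool approved, data class not
assert not is_permitted("personal-chatbot", "public")      # tool not approved at all
```

The value is not the code itself; it is that the policy answers "can I paste X into Y?" with a lookup instead of a judgment call.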
On the potential executive order: nothing changes operationally for now. No order has been signed, and a White House official called the discussion "speculation." If a pre-release review regime is established, however, it could affect the cadence of major model updates from providers like Anthropic, OpenAI, and Google, meaning new capabilities you are planning to deploy may face government review before they ship. Procurement teams evaluating AI vendor contracts should start asking vendors directly how they would handle mandatory pre-release review requirements, and whether their enterprise agreements include provisions for changes in regulatory status.
Tailored to your industry and the AI tools your team uses. Free preview, $79 one-time or $149/mo with monthly updates.
Generate my policy kit →