By the Shadow AI Policy team
The week ending April 30, 2026 was one of the most active for AI governance news in recent memory — with employment-related AI laws advancing in multiple states, a high-stakes EU deadline hanging in legal limbo, and Florida's governor-backed AI Bill of Rights dying in the House hours after the Senate passed it. This briefing covers four stories HR managers, legal leads, and compliance officers at small-to-midsize companies need to read now: Connecticut's Senate passing a sweeping employer AI disclosure bill; Florida's AI Bill of Rights being blocked by the House Speaker on the first day of a special session; the EU AI Act's high-risk deadline remaining at August 2, 2026 after trilogue talks collapsed without agreement; and the European Commission's reported move to classify ChatGPT as a Very Large Online Search Engine under the Digital Services Act.

If your company uses AI in hiring, performance evaluation, scheduling, or termination decisions — anywhere in the U.S. or EU — this week's news makes one thing clear: the compliance clock is running whether or not a final law has passed. Audit which AI tools currently touch employment decisions, document what they do, and confirm your HR vendor agreements include bias-testing and disclosure provisions. The time to start is now, not when the governor signs or the trilogue ends.
Date: April 21, 2026. Connecticut's Senate passed Senate Bill 5 (SB 5) by a 32–4 vote on April 21, making it one of the most far-reaching state AI employment bills in the country. The legislation spans 64 pages and 37 sections, touching nearly every dimension of how AI intersects with commercial life: automated hiring pipelines, frontier model safety requirements, synthetic content labeling, and state employment protections.
The provision that matters most to HR teams: starting October 1, 2026, any employer in Connecticut using AI to inform hiring, scheduling, or employment decisions must notify employees and applicants. The bill's definition of covered tools is intentionally broad. It includes any system that uses computation to generate outputs such as scores, rankings, predictions, classifications, or recommendations, and is a substantial factor in making or materially influencing an employment decision. That language likely covers third-party resume screeners, scheduling algorithms, and performance analytics platforms already in use at many companies today.
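To make that definition easier to apply, here is a minimal sketch of how a compliance team might screen its tool inventory against SB 5's covered-tool language. The field names, example entries, and the `likely_covered` helper are illustrative assumptions for this briefing, not anything specified in the bill text.

```python
from dataclasses import dataclass

# Output types named in SB 5's broad definition of covered automated decision tools.
COVERED_OUTPUTS = {"score", "ranking", "prediction", "classification", "recommendation"}

@dataclass
class AITool:
    name: str                   # hypothetical tool name, for illustration only
    output_type: str            # what the system produces: "score", "ranking", ...
    employment_decisions: list  # decisions it informs: hiring, scheduling, termination, ...
    substantial_factor: bool    # does its output materially influence the decision?

def likely_covered(tool: AITool) -> bool:
    """Rough screen: flags tools that plausibly fall under SB 5's definition
    (computational output that is a substantial factor in an employment
    decision). A starting point for legal review, not legal advice."""
    return (
        tool.output_type in COVERED_OUTPUTS
        and bool(tool.employment_decisions)
        and tool.substantial_factor
    )

# Illustrative inventory entries (hypothetical vendors and uses).
inventory = [
    AITool("ResumeScreener", "ranking", ["hiring"], substantial_factor=True),
    AITool("ShiftOptimizer", "recommendation", ["scheduling"], substantial_factor=True),
    AITool("OfficeChatbot", "classification", [], substantial_factor=False),
]

for tool in inventory:
    flag = "NOTIFY employees/applicants by Oct 1, 2026" if likely_covered(tool) else "monitor"
    print(f"{tool.name}: {flag}")
```

Treat output like this as a prompt for legal review and vendor follow-up, not as a compliance determination.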
There's a significant legal risk attached. The bill amends Connecticut's anti-discrimination statutes to make clear that the use of automated decision technology is not a defense against claims of employment discrimination. Courts and regulators may consider evidence of anti-bias testing, but such testing does not eliminate liability — meaning AI systems must be treated like any other employment decision tool, with full accountability for discriminatory outcomes. Violations would be enforced by the Connecticut Attorney General as unfair or deceptive trade practices.
Beyond hiring, SB 5 creates a Connecticut AI Academy, expands AI workforce training, requires employers to disclose when layoffs are related to AI use, and directs state agencies to help small businesses adopt AI responsibly and competitively. The bill now moves to the House, which declined to take up last year's AI proposal. Read the full bill analysis from the Connecticut Business and Industry Association and the CT Mirror.
If you operate in Connecticut or have remote employees there, don't wait for the House vote. Use this period to generate a tailored AI policy kit that addresses employment AI disclosure requirements before an October 1 deadline becomes a liability.
Date: April 28, 2026. Florida's four-day special legislative session opened on April 28 with Governor Ron DeSantis pushing hard for passage of SB 2D — the state's "Artificial Intelligence Bill of Rights." The Florida Senate gaveled in, immediately took up the bill, waived it out of committee, and passed it 37–1. But the legislation never reached the House floor.
House Speaker Daniel Perez declared that Florida lawmakers would not create new guardrails on artificial intelligence; the only topic the House would address during the four-day special session was redrawing the state's congressional maps. Perez added that it was important to defer to the federal government on AI regulation, referencing Trump's executive order preempting most state AI restrictions. DeSantis responded sharply on social media, accusing House Republicans of catering to what he called the "Big Tech cartel."
The governor's AI Bill of Rights would have affirmed existing protections against AI-generated pornography, including explicit images depicting minors, prohibited Florida government offices from using Chinese-developed AI tools, and provided parental controls on AI for minors. The bill's fate signals the sharpest visible tension yet between state AI regulation advocates and those who argue federal preemption should take the lead — a live debate that is now explicitly splitting Republicans. Read full coverage from WLRN and the Troutman Pepper privacy law blog.
For compliance officers: the Florida outcome doesn't reduce your risk. State attorneys general and regulators may continue to pursue investigations and enforcement actions based on alleged deceptive, misleading, discriminatory, or unfair AI practices, even where those claims are framed outside AI-specific statutes. The federal preemption posture does not limit state enforcement risk in the near term. Generic consumer protection and anti-discrimination law remains fully enforceable.
Date: April 28, 2026. If your company uses AI tools that touch hiring, performance evaluation, task allocation, worker monitoring, or termination — and you operate in or sell to the EU — this is the week's most legally consequential development. The second political trilogue between the European Parliament, the Council of the EU, and the European Commission on April 28, 2026 ended without agreement. If the Digital Omnibus is not formally adopted before August 2, 2026, the original AI Act's provisions — including the high-risk obligations and their current timeline — will apply from that date as written.
The three institutions had convened to negotiate the Digital Omnibus, a proposal to postpone the high-risk AI compliance deadlines. The AI Act classifies AI systems used in employment-related decisions as high-risk, including tools used for recruitment, candidate selection, performance evaluation, task allocation, monitoring of workers, and decisions on promotion or termination. These high-risk obligations are scheduled to apply from August 2, 2026.
A further trilogue has been scheduled for May 13, 2026. The proposed extension, if eventually agreed, would shift the Annex III compliance deadline — which covers employment AI — to December 2, 2027. But that deal is not law yet: as of April 28, 2026, no extension has been published in the Official Journal of the European Union, which means August 2, 2026 remains the only operative legal deadline. Organizations that paused high-risk AI compliance planning in anticipation of an extension are taking a live legal risk. Read the full DLA Piper employment law analysis at DLA Piper GENIE and official EU timeline information at the European Commission.
The table below maps the key EU AI Act deadlines as they currently stand — including what is operative law today versus what the Digital Omnibus proposes:
| Obligation | Current Operative Deadline | Proposed Digital Omnibus Deadline | Status (as of Apr 30, 2026) |
|---|---|---|---|
| Prohibited AI practices & AI literacy | February 2, 2025 ✅ | No change proposed | Already in force |
| GPAI model governance obligations | August 2, 2025 ✅ | No change proposed | Already in force |
| High-risk AI (Annex III) — incl. HR/hiring tools | August 2, 2026 | December 2, 2027 | ⚠️ Trilogue ongoing — Aug 2026 is live law |
| AI transparency/watermarking requirements | August 2, 2026 | November 2, 2026 | ⚠️ Trilogue ongoing |
| High-risk AI embedded in regulated products | August 2, 2027 | August 2, 2028 | ⚠️ Trilogue ongoing |
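For teams that track these dates in their own compliance tooling, the sketch below encodes only the operative deadlines from the table and reports what is already in force as of a chosen review date. The structure and labels are illustrative assumptions; the proposed Digital Omnibus dates are deliberately left out because, as noted above, they are not yet law.

```python
from datetime import date

# Operative EU AI Act deadlines as of April 30, 2026 (per the table above).
# Proposed Digital Omnibus dates are intentionally omitted: they are not law yet.
OPERATIVE_DEADLINES = {
    "Prohibited practices & AI literacy": date(2025, 2, 2),
    "GPAI model governance obligations": date(2025, 8, 2),
    "High-risk AI (Annex III) incl. HR/hiring tools": date(2026, 8, 2),
    "Transparency/watermarking requirements": date(2026, 8, 2),
    "High-risk AI embedded in regulated products": date(2027, 8, 2),
}

def compliance_status(review_date: date) -> None:
    """Print whether each obligation is in force and how many days remain."""
    for obligation, deadline in OPERATIVE_DEADLINES.items():
        days = (deadline - review_date).days
        status = "IN FORCE" if days <= 0 else f"{days} days until it applies"
        print(f"{obligation}: {status}")

compliance_status(date(2026, 4, 30))
```

Run against April 30, 2026, the Annex III employment AI obligations show roughly 94 days remaining under the deadline that is currently on the books.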
Date: Week of April 21–28, 2026. Separately from the AI Act, the European Commission is reportedly preparing to designate ChatGPT as a Very Large Online Search Engine (VLOSE) under the Digital Services Act (DSA) — a regulatory category that carries significantly stronger obligations than the default rules that currently apply to OpenAI. Citing sources within the Commission, German newspaper Handelsblatt reported the classification could be announced within days, subjecting ChatGPT developer OpenAI to some of the bloc's toughest digital regulations. Officials have not confirmed the timing but acknowledge the assessment is under way.
Under the DSA, services with more than 45 million monthly active users in the EU can be designated as either very large online platforms or very large online search engines — labels that define strict obligations aimed at limiting systemic risks to users and society. Data published by OpenAI indicates that ChatGPT's search functionality reached more than 120 million monthly users in the EU, well above the threshold.
OpenAI already adheres to certain DSA rules, but a VLOSE designation would expand its obligations considerably, potentially requiring changes to how ChatGPT is designed and how its systems manage risk. Once the Commission designates a platform or search engine under the DSA, the company has four months to meet the new requirements. These include setting up clear contact channels for both regulators and users; reporting suspected criminal activity; providing accessible and user-friendly terms of service; and ensuring transparency around advertising practices, recommendation algorithms, and content moderation decisions.
For HR and legal teams relying on ChatGPT in workflows involving EU employees or customers: a VLOSE designation would likely require OpenAI to publish more detailed risk assessments and content moderation reports, giving compliance teams better visibility into systemic risks — but it also signals that EU regulators are actively scrutinizing AI tools used at scale. Read full coverage from Computing.
Date: April 2026. Taken together, the Connecticut and Florida stories are part of a much larger wave. Since mid-March 2026, the count of new AI laws enacted in 2026 has grown from 6 to 25. Another 27 bills have passed both chambers and could soon become law. State lawmakers have introduced over 600 AI bills with requirements for private entities in the 2026 legislative sessions so far.
The employment angle is the most directly relevant for HR and operations teams. California lawmakers were active last week, passing numerous bills out of committees, including four chatbot bills, three healthcare-related bills, two employment bills, and a provenance bill. New York Governor Kathy Hochul signed amendments to the RAISE Act on March 27, 2026, shifting the law toward a transparency and reporting-based framework for AI developers. And as of April 2026, EU institutions are actively considering pushing key compliance deadlines to 2027–2028, reflecting implementation challenges and concerns about regulation pace.
The clearest practical pattern across these state-level bills: they regulate the use of AI in employment settings such as hiring, firing, promotion, compensation, and workforce displacement. Companies that haven't yet inventoried where AI touches those decisions — even indirectly, through vendor platforms — are flying blind. For state-by-state tracking, see Troutman Pepper's Privacy + Cyber + AI blog and the law firm Cooley's April 24 state AI law roundup.
About Shadow AI Policy: We build AI acceptable use policy tools for HR and operations teams at 50–500 person companies. We publish guides on shadow AI, acceptable use policies, and AI governance, updated as regulations and AI tools change.
If you use any AI tool that screens resumes, ranks candidates, monitors performance, or supports scheduling decisions, you may already be subject to new state disclosure obligations — with Connecticut's October 1, 2026 deadline being the most immediate for employers with staff there. Even if none of the current bills have passed in your state, state attorneys general can still pursue enforcement under existing consumer protection and anti-discrimination statutes. Start by inventorying which AI tools touch employment decisions and confirm what disclosure and audit obligations your HR software vendors carry.
Yes, if you have employees or customers in Connecticut, California, or the EU. Connecticut's SB 5 — which still needs a House vote but has strong Senate momentum — would require employer notification any time AI is used to inform hiring, scheduling, or employment decisions, starting October 1, 2026. The EU AI Act's high-risk deadline of August 2, 2026 applies to employment-related AI right now, regardless of whether the Digital Omnibus extension is eventually agreed. Companies that paused compliance work are taking a live legal risk.
Annex III of the EU AI Act is the list of high-risk AI use cases subject to the most stringent compliance requirements — documentation, human oversight, bias testing, conformity assessments, and registration in an EU database. It explicitly includes AI used for recruitment, candidate selection, performance evaluation, task allocation, worker monitoring, and promotion or termination decisions. If your company uses AI-assisted hiring or workforce tools and you employ people in the EU or sell services to EU-based businesses, these rules apply to you as a deployer. The August 2, 2026 deadline is current law; the proposed extension to December 2027 has not yet been formally adopted.
Tailored to your industry and the AI tools your team uses. Free preview, $79 one-time or $149/mo with monthly updates.
Generate my policy kit →