The AI Regulation Timeline Every Leader Needs to Know

Published on April 09, 2026

Seven years. That's how long the U.S. government has been writing, rewriting, signing, revoking, and fighting over how to regulate artificial intelligence. And if you're a leader in any organization using AI today, not knowing this timeline isn't just a gap in your knowledge. It's a risk to your operations.

Here's the full story, from the first executive order to the federal-state battle playing out right now.

2019: The Starting Gun

In February 2019, President Trump signed Executive Order 13859, "Maintaining American Leadership in Artificial Intelligence." This was the American AI Initiative, and its message was clear: the U.S. would lead in AI development, and the federal government would stay out of the way.

No binding regulations. No compliance requirements. Just a directive to prioritize AI research funding and open up government data for AI development. It set the tone for everything that followed.

2020: Principles for Government AI

On December 3, 2020, Executive Order 13960 went a step further. "Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government" established principles for how federal agencies should use AI, including transparency, accountability, and reliability. It also required agencies to inventory their AI use cases.

This was the first time the federal government said, "We need rules for how we use this." Still no private sector mandates, but the conversation had shifted.

The same year, the National AI Initiative Act of 2020 formalized federal AI research coordination. The focus was still on development, not regulation.

2022: The AI Bill of Rights

On October 4, 2022, the Biden White House published the Blueprint for an AI Bill of Rights, outlining five protections Americans should have in an AI-driven world: safe and effective systems, protection from algorithmic discrimination, data privacy, notice and explanation, and human alternatives.

It was aspirational, not enforceable. No penalties. No compliance deadlines. But it signaled where the Biden administration wanted to go.

2023: Biden Goes Big

This is the year AI regulation got real.

In July 2023, the Biden administration secured voluntary commitments from seven major AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) to manage AI risks. Voluntary, but public. These companies agreed to safety testing, transparency, and information sharing.

Then on October 30, 2023, Biden signed Executive Order 14110, "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." It has been called the most comprehensive AI governance document ever issued by the U.S. government. It required safety testing for powerful AI models, set standards for labeling AI-generated content, addressed AI in hiring and employment, and directed agencies to assess AI risks across sectors.

For leaders, this was the first time AI policy started to affect operations directly. If you worked in healthcare, finance, housing, or federal contracting, this order touched your work.

2025: The Reversal

On January 20, 2025, within hours of taking office, President Trump revoked Executive Order 14110.

Three days later, he signed Executive Order 14179, "Removing Barriers to American Leadership in Artificial Intelligence." The framing flipped entirely: existing AI policies were "barriers to innovation." The order mandated a 180-day action plan to sustain U.S. AI dominance with minimal regulatory burden.

Then on December 11, 2025, Trump signed Executive Order 14365, "Ensuring a National Policy Framework for Artificial Intelligence." This one went further than deregulation. It directly targeted state-level AI laws, calling "excessive state regulation" an obstacle to federal AI policy. It directed the Attorney General to establish an AI Litigation Task Force to challenge state AI laws deemed unconstitutional or preempted by federal authority.

2026: The Federal-State Showdown

This is where it gets messy, and where it matters most for organizations operating across state lines.

On March 20, 2026, the White House published its National Policy Framework for Artificial Intelligence, recommending that Congress broadly preempt state AI laws that "impose undue burdens." The framework covers data infrastructure, intellectual property, and a push for uniform federal rules.

But states aren't backing down. California's Transparency in Frontier AI Act went into effect January 1, 2026, requiring developers of large AI models to publish transparency frameworks, submit risk assessments, and report safety incidents. Colorado's AI Act takes effect June 30, 2026, with requirements around high-risk AI decision-making.

And on March 30, 2026, Governor Newsom signed Executive Order N-5-26, establishing AI vendor certification and procurement standards for any company selling AI products to the state of California. The strategic move: by framing it as procurement policy rather than industry regulation, California may be able to sidestep federal preemption entirely.

Congress, meanwhile, has stripped federal preemption provisions from both the "One Big Beautiful Bill Act" and the National Defense Authorization Act after pushback from states. As of April 2026, no federal law preempting state AI regulations has passed.

Meanwhile, Across the Atlantic

The EU AI Act entered into force on August 1, 2024, and is rolling out in phases. Prohibited AI systems (manipulative AI, predictive policing by profiling) were banned as of February 2, 2025. Rules for general-purpose AI models took effect on August 2, 2025. The big compliance deadline, covering high-risk AI systems, hits August 2, 2026.

If your organization operates internationally or uses AI tools from EU-based providers, this timeline matters just as much as the domestic one.

What This Means for Your Organization

Here's the practical reality for leaders right now:

  1. There is no single set of AI rules. Federal and state policies are actively contradicting each other. If you operate in multiple states, you may face different compliance requirements in each one.

  2. Procurement is the new regulation. California's approach, using state purchasing power to enforce AI standards, is likely to be copied by other states. If you sell AI products or services to government, expect vendor certification requirements to multiply.

  3. The EU deadline is real. August 2026 brings enforceable high-risk AI requirements with real penalties. If you use AI in hiring, credit decisions, healthcare, or critical infrastructure, check your compliance posture now.

  4. Document everything. Regardless of which way the federal-state battle goes, the organizations that can demonstrate responsible AI use, transparency in how they deploy it, and clear governance around AI decisions will be in the strongest position.

  5. Watch Congress, not just the White House. Executive orders can be signed and revoked in a single day. Legislation is what sticks. And right now, Congress is the battleground where the real rules will be decided.

The AI regulation story is far from over. But waiting for clarity before acting is itself a decision, and not a good one. Build your governance framework now, document your AI usage, and stay flexible enough to adapt when the rules inevitably shift again.


Melanie Markes is the Director of Business Intelligence at CareerSource Central Florida and founder of Blue Dawn Tech. She writes about AI, data strategy, and building practical technology solutions for leaders.
