AI Action Plan

2025 United States federal artificial intelligence policy document

"Winning the Race: America's AI Action Plan" is a policy blueprint published by the White House on July 23, 2025, setting out more than 90 federal policy actions intended to secure United States dominance in artificial intelligence.[1] The 28-page document was developed by the Office of Science and Technology Policy (OSTP), with Dean Ball, then serving as OSTP's Senior Policy Advisor for AI and Emerging Technology, as its primary staff drafter.[2] It was released at a summit titled "Winning the AI Race," hosted by the Hill and Valley Forum and the All-In podcast at the Andrew W. Mellon Auditorium in Washington, D.C.[3] President Donald Trump signed three accompanying executive orders at the event.[4]


The plan was widely described as a significant departure from the Biden administration's AI executive order, which Trump had revoked on his first day in office.[5] Where the Biden approach had emphasized safety, risk management, and equity, the Action Plan focused on deregulation, infrastructure expansion, international competition with China, and the removal of what the administration characterized as ideological bias from AI systems.[6]

Background

Executive Order 14179 and the RFI process

On January 23, 2025, Trump signed Executive Order 14179, titled "Removing Barriers to American Leadership in Artificial Intelligence," which directed the development of an AI Action Plan to "sustain and enhance America's global AI dominance in order to promote human flourishing, economic competitiveness, and national security."[7] On February 6, 2025, the National Science Foundation's Networking and Information Technology Research and Development (NITRD) program, acting on behalf of OSTP, published a Request for Information (RFI) in the Federal Register, inviting comment from the public, academia, industry, and governments by March 15, 2025.[8]

OSTP published more than 10,000 public comments in April 2025.[9] Notable respondents included OpenAI, which proposed a five-part strategy encompassing regulatory preemption, export controls, copyright protections, infrastructure investment, and government adoption;[10] Google, which advocated for energy infrastructure reform, innovation-friendly international approaches, and continued federal research funding;[11] and Anthropic, which focused on national security testing of frontier models, strengthening export controls, enhancing the security of AI labs, and scaling energy infrastructure.[12]

Relationship to prior policy

The Action Plan represented the second Trump administration's replacement for the regulatory framework established by Executive Order 14110 (October 2023), which had directed federal agencies to develop safety standards, require reporting from developers of powerful AI systems, and address algorithmic discrimination.[13] Trump had revoked that order on January 20, 2025. During his first term, Trump had issued Executive Order 13859 (February 2019), "Maintaining American Leadership in Artificial Intelligence," which similarly prioritized U.S. competitiveness but with a narrower scope.[14]

Contents

The Action Plan is organized around three pillars, with cross-cutting themes of workforce empowerment, freedom from ideological bias in AI systems, and prevention of misuse by adversaries.

Pillar I: Accelerate AI Innovation

The first pillar contains policy recommendations aimed at ensuring the United States leads in both the development and application of AI systems. Key provisions include:

  • Deregulation: OSTP is directed to launch an RFI identifying federal regulations that hinder AI innovation. The Office of Management and Budget (OMB) is to work with agencies to identify and revise or repeal such regulations, consistent with Executive Order 14192 ("Unleashing Prosperity Through Deregulation"). OMB is also directed to consider a state's AI regulatory climate when making funding decisions for AI-related discretionary programs, and the Federal Communications Commission is asked to evaluate whether state AI regulations interfere with its authorities under the Communications Act of 1934.[AP 1]
  • Open-source AI and compute access: The plan calls for improving financial markets for compute (e.g., spot and forward markets), expanding the National AI Research Resource (NAIRR) pilot, and having the National Telecommunications and Information Administration convene stakeholders to drive adoption of open-source and open-weight AI models by small and medium-sized businesses.[AP 3]
  • AI evaluations: NIST and CAISI are to publish guidelines for federal agencies to conduct evaluations of AI systems, support the development of measurement science for AI, and convene meetings at least twice per year for agencies and researchers to share best practices. The Department of Energy and NSF are directed to invest in AI testbeds across economic sectors including agriculture, transportation, and healthcare.[AP 4]
  • AI safety research: DARPA is to launch a technology development program, in collaboration with CAISI and NSF, to advance AI interpretability, control systems, and adversarial robustness. The plan also calls for prioritizing these areas in the forthcoming National AI R&D Strategic Plan.[AP 5]
  • Government adoption: The plan formalizes the Chief Artificial Intelligence Officer Council (CAIOC) as the primary interagency body for AI adoption, calls for a talent-exchange program across agencies, and directs the General Services Administration to create an AI procurement toolbox. All federal employees whose work could benefit from access to frontier language models are to be given access and training.[AP 6]
  • Deepfakes: NIST is directed to consider developing its Guardians of Forensic Evidence program into a formal guideline, and the Department of Justice is to issue guidance on deepfake standards for agency adjudications, building on the TAKE IT DOWN Act signed in May 2025.[AP 8]

Pillar II: Build American AI Infrastructure

The second pillar addresses energy, data centers, semiconductors, cybersecurity, and workforce development for physical AI infrastructure.

  • Permitting reform: The plan calls for new NEPA Categorical Exclusions for data center construction, expanded use of the FAST-41 permitting process, a possible nationwide Clean Water Act Section 404 permit for data centers, streamlined regulations under the Clean Air Act and other environmental statutes, and making federal lands available for data center and power generation construction.[AP 9]
  • Energy and the grid: Recommendations include preventing premature decommissioning of power generation resources, implementing advanced grid management technologies, embracing nuclear fission and fusion as well as enhanced geothermal, and reforming power markets to align incentives with grid stability.[AP 10]
  • Semiconductors: The CHIPS Program Office at the Department of Commerce is directed to focus on return on investment for the taxpayer and to remove "extraneous policy requirements" for CHIPS-funded projects, while also integrating advanced AI tools into semiconductor manufacturing.[AP 11]
  • Cybersecurity: An AI Information Sharing and Analysis Center (AI-ISAC) is to be established by the Department of Homeland Security, and agencies are directed to promote secure-by-design AI development and to incorporate AI considerations into existing cybersecurity incident response playbooks.[AP 12]

Pillar III: Lead in International AI Diplomacy and Security

The third pillar covers export promotion, export controls, international governance, and national security evaluation of frontier models.

  • AI exports: The Department of Commerce is to solicit proposals from industry consortia for "full-stack AI export packages" encompassing hardware, models, software, applications, and standards, with financing coordinated by the Economic Diplomacy Action Group and other federal agencies.[AP 13]
  • Export controls: The plan calls for leveraging location verification features on advanced AI compute to prevent diversion to countries of concern, expanding end-use monitoring, developing new controls on semiconductor manufacturing sub-systems, and using diplomatic tools including the Foreign Direct Product Rule and secondary tariffs to align allied export controls with U.S. policy.[AP 15]
  • Frontier model evaluation for national security: CAISI is to evaluate frontier AI systems for national security risks, particularly in CBRNE and cyber domains, in partnership with frontier developers and national security agencies. The plan also calls for evaluating foreign AI systems used in U.S. critical infrastructure for backdoors and other malicious behavior.[AP 16]
  • Biosecurity: Institutions receiving federal research funding are to be required to use nucleic acid synthesis providers with robust sequence screening and customer verification procedures, with enforcement mechanisms rather than voluntary attestation.[AP 17]

Accompanying executive orders

On the same day the Action Plan was released, Trump signed three executive orders to begin implementation:[4][15]

  1. Promoting the Export of the American AI Technology Stack: Established the American AI Exports Program under the Department of Commerce, directing the development of full-stack AI export packages and mobilization of federal financing tools, with an implementation deadline of October 21, 2025.[16]
  2. Accelerating Federal Permitting of Data Center Infrastructure: Streamlined permitting for AI infrastructure projects on federal land and revoked the Biden administration's January 2025 Executive Order 14141 on AI infrastructure, which had required environmental reviews and alignment with clean energy goals.[17]
  3. Preventing Woke AI in the Federal Government: Established "unbiased AI principles" requiring that large language models procured by the federal government be "truthful" and "ideologically neutral," and directed OMB to issue implementing guidance by November 20, 2025.[16]

Subsequent developments

On December 11, 2025, Trump signed an additional executive order titled "Ensuring a National Policy Framework for Artificial Intelligence," which directed the Department of Justice to establish an AI Litigation Task Force to challenge state AI laws in court, instructed Commerce to identify state laws deemed "onerous" within 90 days, and threatened to withhold BEAD program funding from states with AI regulations the administration considered conflicting with federal policy.[18]

On March 20, 2026, the White House released a legislative framework outlining seven areas where it sought congressional action: child safety, community protections, intellectual property, free speech, innovation, workforce development, and federal preemption of state AI laws. The framework was developed by OSTP Director Michael Kratsios and White House Special Advisor David O. Sacks.[19]

Efforts to preempt state AI regulation through legislation faced repeated setbacks. A proposed 10-year moratorium on state-level AI regulation was included in the One Big Beautiful Bill Act but was stripped by the Senate in a 99–1 vote following opposition from a coalition of 40 state attorneys general and 260 state legislators.[20] A similar provision in the National Defense Authorization Act for fiscal year 2026 also failed.[18]

Reception

Industry

Major technology industry groups expressed support. TechNet called the plan a "policy framework [that] takes critical steps towards developing a strong domestic workforce, building critical AI infrastructure, launching public-private partnerships, removing regulatory barriers to innovation, strengthening the domestic AI stack, and enhancing U.S. global AI diplomacy."[5] Law firms including Skadden, White & Case, and Ropes & Gray published client advisories highlighting potential business opportunities in AI exports, data center permitting, and federal procurement, while also noting uncertainties around how "ideological bias" would be defined and enforced in practice.[6][21][22]

Policy analysts and think tanks

The Council on Foreign Relations published a multi-author assessment characterizing the plan as "a tale of two impulses within the administration." Contributors praised the inclusion of frontier model evaluations for national security risks but noted that it was unclear what would happen when evaluations found that a model had crossed a capability threshold, observing that there was "no shared understanding of what those mitigations should entail or who decides what qualifies as sufficient." Defense analysts highlighted the recommendation for an AI and Autonomous Systems Virtual Proving Ground but questioned whether the administration would follow through on resourcing. The CFR analysis also pointed to tensions between the goal of countering Chinese influence in international organizations and the administration's broader policies of withdrawing personnel and funding from multilateral institutions.[23]

The Brookings Institution published an extensive critique arguing that the plan gave "insufficient attention to accountability, ethics, and transparency," creating risks related to "unregulated AI systems, erosion of privacy, algorithm bias, polarization, misinformation, exploitative surveillance, unchecked corporate control over critical technologies, [and] unintended consequences on democratic governance." Brookings scholars also noted the tension between the plan's sweeping mandates for the National Science Foundation (e.g., leading new research labs, expanding NAIRR, developing testbeds) and the simultaneous defunding and destabilization of NSF under the administration, including the cancellation of more than 1,600 active grants. Separately, the Brookings analysis criticized the plan for lacking a domestic competition policy to prevent concentration among a small number of dominant AI firms.[24]

Brookings fellow Tom Wheeler argued that the "Preventing Woke AI" executive order amounted to "top-down censorship" and drew comparisons to content control practices in China, where AI outputs must align with the official ideology of the Chinese Communist Party. Wheeler and other commentators noted the vagueness of the term "ideological bias" and argued that the policy could burden smaller AI developers disproportionately, since large firms could absorb the compliance costs more easily.[25]

MIT Technology Review noted that, compared with Biden-era executive orders, the Action Plan was "mostly devoid of anything related to making AI safer," with the notable exception of the deepfake provisions.[26]

National security and biosecurity

The plan's provisions on frontier model evaluation and biosecurity drew substantive engagement from national security researchers and institutions. The law firm Steptoe observed that, despite avoiding the phrase "AI safety," the Action Plan included language and provisions that would be "familiar to experts from the AI safety world," including sections on interpretability, AI control, model evaluation, and CBRNE risks, and suggested that the administration was "indeed concerned by a range of AI safety issues, even if it does not use that phrase."[27]

The RAND Corporation published a primer for biosecurity researchers noting that the plan "plants a flag in the sand by establishing model evaluation as a new and rapidly evolving science" serving the government's need to understand frontier model risks in domains such as CBRN threats. RAND highlighted that the plan represented continuity with the prior administration's policies in the biological threat domain, while reframing the overall emphasis from "safety" to "opportunity," a shift previously signaled by the renaming of the U.S. AI Safety Institute as the Center for AI Standards and Innovation (CAISI).[28]

The Johns Hopkins Center for Health Security welcomed the inclusion of nucleic acid synthesis screening requirements and CAISI-led frontier model evaluations but recommended that CAISI adopt a risk-based approach prioritizing pandemic-level biological threats rather than running dozens of broad evaluations, arguing that the latter approach risked being costly and unsustainable while potentially missing the most consequential risks.[29] The Council on Strategic Risks praised the plan's biosecurity sections and urged the administration to set a timeline of nine months or less for moving from discussion to implementation of frontier model evaluations for national security risks.[30]

International context

The Action Plan was released amid a broader global contest over AI governance. The European Union had begun enforcing the EU AI Act and launched its AI Continent Action Plan in April 2025. Analysts at the Real Instituto Elcano characterized the U.S. approach as "largely hands-off" compared with the EU's risk-based regulatory framework, noting that the U.S. strategy had benefited its private sector, which led global private AI investment in 2024 with nearly $110 billion. However, the same analysis observed that the competitive dynamic risked overshadowing efforts on AI safety and risk management internationally.[31] The German Marshall Fund noted that the term "AI safety" had become "increasingly politically contentious" in the United States since 2025, though several federal policies, including the Action Plan's directives on high-risk AI use cases, continued to incorporate risk-management practices that resembled elements of the Biden-era approach.[32]

Notes

  1. America's AI Action Plan, pp. 3–4.
  2. America's AI Action Plan, p. 4.
  3. America's AI Action Plan, pp. 4–5.
  4. America's AI Action Plan, p. 10.
  5. America's AI Action Plan, pp. 9–10.
  6. America's AI Action Plan, pp. 10–11.
  7. America's AI Action Plan, pp. 6–7.
  8. America's AI Action Plan, pp. 12–13.
  9. America's AI Action Plan, pp. 14–15.
  10. America's AI Action Plan, pp. 15–16.
  11. America's AI Action Plan, p. 16.
  12. America's AI Action Plan, pp. 18–19.
  13. America's AI Action Plan, p. 20.
  14. America's AI Action Plan, p. 20.
  15. America's AI Action Plan, pp. 21–22.
  16. America's AI Action Plan, pp. 22–23.
  17. America's AI Action Plan, p. 23.
