The new federal plan promotes deregulation, open models, and faster AI adoption in complex sectors like healthcare.
What happened
On July 23, the Trump administration unveiled its “Winning the Race” AI Action Plan, intended to accelerate artificial intelligence development by reducing regulatory hurdles and prioritizing innovation. The plan directs the Department of Commerce to revise federal AI risk management frameworks and remove language it deems ideological, with the aim of streamlining how companies do business with the government.
Healthcare, described as one of the slowest sectors to adopt AI, is the main focus. The plan outlines the creation of regulatory sandboxes, evaluation standards for high-stakes environments, and the removal of Biden-era protections related to diversity, climate change, and misinformation.
Going deeper
The AI Action Plan stems from President Trump’s January 2025 executive order and includes initiatives to bolster computing infrastructure, enable open-source model development, and coordinate national standards for safe AI deployment. The administration positioned this as a coordinated effort to encourage a “try-first” culture across American industry.
The National Institute of Standards and Technology (NIST) is being tasked with leading AI testing and the development of AI testbeds in healthcare and other sectors. Federal procurement guidelines will prioritize contracts with companies building “objective” AI models. Trump also signed an executive order banning what he called “woke AI” from government use.
At the same time, the plan signals that federal funding may hinge on how states regulate AI, placing pressure on state-level lawmakers. Legal experts expect this to trigger litigation over federal-state regulatory conflicts, particularly in healthcare reimbursement and Medicaid oversight.
What was said
Stephen Bittinger, a partner at law firm Polsinelli, predicted litigation around CMS compliance and separation of powers in state Medicaid programs, noting providers will face "evolving compliance hurdles" with implications for HIPAA, cybersecurity, and reimbursements.
The EHR Association praised the plan’s push for innovation but urged the creation of a consistent, national regulatory model. National Nurses United, alongside dozens of other organizations, joined the AI Now Institute’s call for a “People’s AI Action Plan” to ensure AI development prioritizes public good over profit.
Trump, speaking at the AI Summit, framed the initiative as a historic moment to “reassert the future which belongs to America,” and stated that AI systems must reflect “truth, fairness, and strict impartiality.”
The big picture
According to The Guardian, Trump said, “America must once again be a country where innovators are rewarded with a green light, not strangled with red tape...” At the AI summit, he introduced a sweeping federal plan to fast-track AI adoption by scaling back regulations and emphasizing national loyalty in AI development. The initiative includes executive orders targeting “woke” AI and relaxing environmental rules for data centers. While the plan was welcomed by tech leaders, civil rights and labor groups warned it allows industry to shape AI policy “at the expense of our freedom and equality.”
FAQs
What is a regulatory sandbox and how does it apply to AI in healthcare?
A regulatory sandbox is a controlled environment where companies can test new technologies under relaxed regulatory conditions. In healthcare, this could allow AI tools to be trialed in clinical settings with real data, enabling faster feedback and adoption.
How might this plan affect AI-related litigation in healthcare?
By tying federal funding to state AI regulations and reducing federal oversight, the plan is expected to spark legal challenges related to Medicaid compliance, HIPAA standards, and reimbursement policies.
What is the AI-ISAC, and what role will it play?
The AI Information Sharing and Analysis Center (AI-ISAC) is a proposed body for threat intelligence sharing. It is intended to help federal agencies and partners identify and respond to AI-related security risks more effectively.
Why is NIST being given a larger role in AI governance?
NIST is tasked with developing testbeds and standards for evaluating AI systems, particularly in high-risk sectors like healthcare, to ensure system performance and reliability meet federal expectations.
What does the People's AI Action Plan propose as an alternative?
The People’s AI Action Plan, backed by advocacy and civil rights groups, calls for AI governance centered on public interest, including worker protections, environmental responsibility, and equitable access, as a counter to deregulation-focused strategies.