President Donald Trump released new proposals for regulating and supporting artificial intelligence (AI) on July 23, 2025. “America’s AI Action Plan” insists that the U.S. “needs to innovate faster and more comprehensively than our competitors in the development and distribution of new AI technology across every field, and dismantle unnecessary regulatory barriers that hinder the private sector in doing so.”
The White House plan sports “three pillars: innovation, infrastructure, and international diplomacy and security,” some of which, the plan suggests, justify President Trump’s rescission of former President Biden’s AI executive order.
“To maintain global leadership in AI,” the plan says, “America’s private sector must be unencumbered by bureaucratic red tape.” To further that goal, the White House is tasking federal agencies with reconsidering “current Federal regulations that hinder AI innovation and adoption,” and with ensuring that any “AI-related discretionary funding programs… consider a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award” (in line with the failed 10-year moratorium on state AI regulation).
Of particular interest to the insights industry, given that the Federal Trade Commission (FTC) is our primary U.S. regulator, the White House tasks the FTC with reviewing “investigations commenced under the previous administration to ensure that they do not advance theories of liability that unduly burden AI innovation,” and directs the agency to “review all FTC final orders, consent decrees, and injunctions, and, where appropriate, seek to modify or set-aside any that unduly burden AI innovation.”
The plan’s biggest concern with so-called “frontier” AI systems appears to be freedom of speech: “It is essential that these systems be built from the ground up with freedom of speech and expression in mind, and that U.S. government policy does not interfere with that objective. We must ensure that free speech flourishes in the era of AI and that AI procured by the Federal government objectively reflects truth rather than social engineering agendas.”
Worrying that AI is not being adopted fast enough, the White House wants to see a “coordinated Federal effort” to establish “a dynamic, ‘try-first’ culture for AI across American industry,” including regional “regulatory sandboxes or AI Centers of Excellence” and multistakeholder “domain-specific … (e.g., in healthcare, energy, and agriculture)” convenings, “led by NIST at DOC.”
The plan also includes “a priority set of actions to expand AI literacy and skills development, continuously evaluate AI’s impact on the labor market, and pilot new innovations to rapidly retrain and help workers thrive in an AI-driven economy.”
As part of the plan’s focus on “AI-Enabled Science,” the White House proposes to “Require federally funded researchers to disclose non-proprietary, non-sensitive datasets that are used by AI models during the course of research and experimentation.”
The plan refers to “[h]igh-quality data” as “a national strategic asset as governments pursue AI innovation goals and capitalize on the technology’s economic benefits. Other countries, including our adversaries, have raced ahead of us in amassing vast troves of scientific data. The United States must lead the creation of the world’s largest and highest quality AI-ready scientific datasets, while maintaining respect for individual rights and ensuring civil liberties, privacy, and confidentiality protections.” America’s AI Action Plan also calls for the U.S. Office of Management and Budget (OMB) to write regulations under the Confidential Information Protection and Statistical Efficiency Act (CIPSEA) “on presumption of accessibility and expanding secure access, which will lower barriers and break down silos to accessing Federal data, ultimately facilitating the improved use of AI for evidence building by statistical agencies while protecting confidential data from inappropriate access and use.”
AI evaluations “are how the AI industry assesses the performance and reliability of AI systems,” the plan says, and rigorous ones “can be a critical tool in defining and measuring AI reliability and performance in regulated industries.” The White House would: support “the development of the science of measuring and evaluating AI models, led by NIST at DOC, DOE, NSF, and other Federal science agencies”; convene twice-yearly meetings at the Department of Commerce “for Federal agencies and the research community to share learnings and best practices on building AI evaluations”; invest “in the development of AI testbeds for piloting AI systems in secure, real-world settings, allowing researchers to prototype new AI systems and translate them to the market” with “participation by broad multistakeholder teams” across “a wide variety of economic verticals touched by AI, including agriculture, transportation, and healthcare delivery”; and “empower the collaborative establishment of new measurement science that will enable the identification of proven, scalable, and interoperable techniques and metrics to promote the development of AI.”
To protect American innovations in AI, the White House proposes to “collaborate with leading American AI developers to enable the private sector to actively protect AI innovations from security risks, including malicious cyber actors, insider threats, and others.”
Read the full report, including the other major sections on building out infrastructure, and diplomacy and international security issues, here.
About the Author

Based in Washington, DC, Howard is the Insights Association's lobbyist for the marketing research and data analytics industry, focusing primarily on consumer privacy and data security, the Telephone Consumer Protection Act (TCPA), tort reform, and the funding and integrity of the decennial Census and the American Community Survey (ACS).
Howard has more than two decades of public policy experience. Before the Insights Association, he worked in Congress as senior legislative staffer for then-Representatives Christopher Cox (CA-48) and Cliff Stearns (FL-06). He also served more than four years with a science policy think tank, working to improve the understanding of scientific and social research and methodology among journalists and policymakers.
Howard is also co-director of The Census Project, a 900+ member coalition in support of a fair and accurate Census and ACS.
He previously served on the Boards of Directors of the National Institute for Lobbying and Ethics and the Association of Government Relations Professionals.
Howard has an MA in International Relations from the University of Essex in England and a BA (Honors) in Political Studies from Trent University in Canada, and has earned the Certified Association Executive (CAE), Professional Lobbying Certificate (PLC), and Public Policy Certificate (PPC) credentials.
When not running advocacy for the Insights Association, Howard enjoys hockey, NFL football, sci-fi and horror movies, playing with his dog, and spending time with family and friends.