How Trump’s AI Action Plan Reshapes Enterprise AI
Plus, the hidden cost of “Almost Right” AI, a policy roundup from around the globe, and how DOGE and Virginia are using AI to cull regulatory rules
Hi there, Happy Wednesday, and welcome to the latest edition of the Trustible Newsletter! Yesterday was a busy one for major model announcements, with OpenAI launching its very first open-weight models and Anthropic following close behind with Claude 4.1. But all eyes are on the long-awaited release of OpenAI’s GPT-5, expected any day now. We’re hard at work at Trustible analyzing the model cards for these new releases, and we’ll be publishing our findings and updated ratings on aimodelratings.com soon.
In the meantime, in today’s edition (5-6 minute read):
What the Trump Administration’s AI Action Plan means for enterprises
The Hidden Cost of ‘Almost Right’ AI
Global & U.S. Policy Roundup
DOGE and Virginia using AI to eliminate regulatory rules
1. What the Trump Administration’s AI Action Plan Means for Enterprises
Recently, the Trump Administration released "Winning the AI Race: America’s AI Action Plan" (AI Action Plan), following a January executive order aimed at enhancing U.S. AI leadership. The plan offers roughly 90 policy recommendations organized under three pillars: accelerating AI innovation, building American AI infrastructure, and leading in international AI diplomacy and security. Although primarily focused on federal actions, several recommendations could significantly impact private-sector companies that develop, deploy, or use AI.
Notably, three themes emerge regarding enterprise implications:
Firstly, the administration introduces uncertainty by challenging existing regulatory frameworks. Recommendations such as a "shadow" moratorium on state AI regulations, achieved by restricting federal funding to states with burdensome AI rules, could disrupt businesses navigating state-level requirements. Similarly, proposed FTC reviews of AI-related investigations could further cloud compliance expectations.
Secondly, some AI priorities contradict other administration goals, notably around talent and energy policy. For example, recommendations to streamline energy-infrastructure permitting for AI data centers clash with existing sustainability efforts. Similarly, the proposed removal of Diversity, Equity, and Inclusion (DEI) references from NIST’s AI Risk Management Framework could complicate internal workforce policies and talent attraction.
Thirdly, federal interest in setting AI standards could result in cascading obligations for government contractors. Updates to federal procurement guidelines emphasizing ideologically unbiased AI models could complicate AI vendor selection, while requirements for critical-infrastructure cybersecurity and AI incident-response guidance could require organizations to adopt stringent federal standards.
The AI Action Plan also highlights opportunities for private-sector influence, such as industry-specific stakeholder engagement aimed at accelerating national AI standards adoption. Recommendations to bolster AI literacy and reskilling via tax incentives further provide tangible benefits to businesses.
Finally, despite significant ambitions, the Plan’s effectiveness relies heavily on congressional action and federal agency capacity, both of which face uncertainties, particularly after recent workforce reductions and potential political shifts in 2026. Nonetheless, the administration’s stance sends clear signals shaping the broader AI regulatory and operational environment, compelling businesses to proactively adapt strategies in anticipation of changing federal priorities.
You can read our full analysis on our blog here.
2. The Hidden Cost of ‘Almost Right’ AI
For all the hype around AI as a productivity tool, a stubborn truth is emerging: using AI doesn’t always save time. In some cases, it might even make things worse.
The risk isn’t just in bad outputs—it’s in over-relying on AI that sounds confident but gets the details wrong. The more trust you place in these systems without sufficient oversight, the more likely you are to spend your time cleaning up after them.
Three recent studies underscore this dynamic. METR, an AI research lab, recently found that experienced engineers working on open-source development were often slowed down by AI code suggestions. Even with expert users, “plausible but wrong” outputs led to wasted debugging time and reduced net productivity. Similarly, Stack Overflow’s recent annual developer survey revealed a hidden productivity tax tied to AI-assisted coding: developers often spent more time fixing or validating answers than they would have spent solving the problem themselves. Finally, Atlassian’s recent developer survey found that while engineers self-reported faster development with AI, downstream processes such as code review, manual testing, and deployment slowed down because they were now overwhelmed with more code changes and bugs than before. This points to a ‘pipeline’ bottleneck caused by uneven AI adoption along the value chain.
If teams aren't careful, they may spend more time double-checking and redoing AI outputs than if they’d just done the work manually. Worse, over-reliance can create blind spots—leading people to accept wrong answers without realizing it.
Our Take: AI performs best on constrained tasks where outcomes are easy to verify, like short text generation or specific coding challenges. For complex, multi-step work like long coding tasks, the validation cost can outweigh the generation benefit compared to a human baseline, as the sketch below illustrates.
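To make the “plausible but wrong” failure mode concrete, here is a minimal, hypothetical sketch; the function and the bug are our own illustration, not taken from any of the studies above. The AI-suggested version reads cleanly and runs without error, which is exactly why the validation step is the expensive part:

```python
# Hypothetical illustration of a "plausible but wrong" AI suggestion.
# The code runs, reads cleanly, and survives a quick skim, but is wrong.

def top_k_unique(scores: list[float], k: int) -> list[float]:
    """AI-suggested: return the k highest unique scores, highest first."""
    # BUG: sorted() is ascending by default, so this returns the k LOWEST.
    return sorted(set(scores))[:k]

def top_k_unique_reviewed(scores: list[float], k: int) -> list[float]:
    """Human-reviewed fix: sort descending before slicing."""
    return sorted(set(scores), reverse=True)[:k]

# The validation work a reviewer still has to do by hand:
sample = [0.9, 0.1, 0.9, 0.5]
assert top_k_unique_reviewed(sample, 2) == [0.9, 0.5]
assert top_k_unique(sample, 2) == [0.1, 0.5]  # plausible output, wrong answer
```

Catching this requires someone to write the test, which is the “validation cost” the studies describe; on a multi-step task, those costs compound across every generated change.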
3. Global & U.S. Policy Roundup
Here is our quick synopsis of the major AI policy developments:
U.S. Federal Government. Beyond releasing the AI Action Plan, the Trump Administration is leaning on Asian countries to develop an AI approach that differs from the EU’s. The Securities and Exchange Commission also announced an AI task force aimed at improving AI adoption across the agency. In Congress, Senator Mike Rounds (R-SD) introduced a bipartisan bill to set guardrails for AI usage in financial services.
U.S. States. AI-related policy developments at the state level include:
California. The California Privacy Protection Agency voted unanimously to finalize regulations on cybersecurity audits, risk assessments, and automated decisionmaking (ADM) technology. The ADM rules will take effect on January 1, 2027.
Florida. Governor Ron DeSantis (R-FL) indicated that he will unveil legislation related to AI safeguards. DeSantis was a critic of the attempted federal moratorium on state AI laws. The expected legislative proposals would continue to put the Governor at odds with the Trump Administration, which generally views AI regulations as unnecessarily burdensome and a drag on AI innovation.
Illinois. Governor JB Pritzker signed the Wellness and Oversight for Psychological Resources Act, which prohibits the use of AI to provide mental health therapy or make therapeutic decisions. The law still allows licensed behavioral health professionals to use AI tools for administrative purposes and supplementary support services. The new law comes as OpenAI announced new mental health guardrails for ChatGPT.
Michigan. The Michigan Unemployment Insurance Agency launched a new chatbot to help “deliver quick and accurate responses to questions from workers and employers.” It is the first Michigan state agency to deploy a chatbot on its public-facing website.
Texas. A recent report found that two data centers outside San Antonio consumed approximately 463 million gallons of water between 2023 and 2024. The usage is particularly jarring given that Texas residents were under water restrictions during the same period due to an ongoing drought. Data centers are expected to account for almost 7% of Texas’s total water usage by 2030.
Asia. AI-related policy developments in Asia include:
China. During the annual World Artificial Intelligence Conference in Shanghai, the Chinese government unveiled its own AI Action Plan, notably just three days after the Trump Administration released its plan. China’s plan emphasizes greater participation in international fora to shape AI standards and increase AI adoption.
Singapore. Microsoft and Digital Industry Singapore announced a new Agentic AI Accelerator program. The announcement comes as more AI companies have set up shop in Singapore over the past year, in part due to the country's business-friendly climate.
Thailand. The government’s Department of Special Investigation (DSI) is using AI to substantiate claims of cheating in Thailand's 2024 Senate elections. Specifically, the DSI is using AI to analyze “14 terabytes of CCTV footage and other voting data” as part of its investigation.
EU. The next set of EU AI Act obligations took effect on August 2, the most notable being obligations for general-purpose AI (GPAI) models. The new obligations kick in as Google announced it would sign on to the EU’s GPAI Code of Practice, whereas xAI agreed to sign only the safety and security chapter.
Middle East. As Middle East countries continue to lead on AI infrastructure, they are confronting water constraints. The United Arab Emirates (UAE) in particular is one of the most water-stressed countries in the world, yet its data centers are estimated to use up to 61 billion liters of water annually by 2030.
North America. AI-related policy developments in North America outside of the U.S. include:
Canada. The Canadian government is committing $1 million to a joint AI safety initiative with the U.K. The announcement comes as part of a broader collaborative effort on AI between Canada and the U.K.
Mexico. The Mexican government announced that it is developing its own LLM. The government intends to make the model available to “5 million university students and more than 5 million businesses,” though the country faces some infrastructure constraints. The push to develop culturally appropriate LLMs has gained traction this year, after a bloc of Latin American countries unveiled plans to develop a Latin America-specific LLM.
4. DOGE and Virginia using AI to eliminate regulatory rules
Virginia and the U.S. Department of Government Efficiency (DOGE) are both rolling out agentic AI tools to scan regulatory code—flagging outdated, redundant, or conflicting rules for removal. Virginia’s system is already combing through the state’s administrative code. DOGE says its tool will review over 200,000 federal rules, and agencies like HUD and CFPB are already testing it. DOGE’s initiative is supposedly only targeting rules that are no longer required by law, although that determination requires legal expertise to make.
This is one of the first real deployments of AI into the public sector with direct regulatory implications. While these tools are technically “advisory,” reports suggest agencies are treating the AI outputs as strong signals. At HUD, some regulations were flagged as outside statutory authority, even when they weren’t. It’s not clear what evaluation criteria are being used—or who has final say. If AI-generated flags trigger removals or policy shifts, these could qualify as high-risk or decision-making systems under several AI governance frameworks.
Nothing in Governor Youngkin’s announcement details which AI system is being used, or how it may be adapted to the nuances of regulatory text, whose specific stylistic conventions could trip up general-purpose AI systems.
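Since none of the public announcements describe the actual tooling, the sketch below is purely our assumption of what an “advisory” rule-flagging pipeline could look like; the prompt, the `llm_call` placeholder, and the `Flag` structure are all hypothetical, not the DOGE or Virginia system:

```python
# Minimal sketch of an advisory rule-flagging pipeline (our assumptions;
# the actual DOGE and Virginia systems have not been publicly described).
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Flag:
    rule_id: str
    reason: str  # model's explanation; advisory only, not a legal finding

def flag_rule(rule_id: str, rule_text: str,
              llm_call: Callable[[str], str]) -> Optional[Flag]:
    """Ask a model whether a rule looks outdated, redundant, or conflicting.

    `llm_call` is a placeholder for whatever model API is actually used.
    Any Flag returned should route to human legal review, never directly
    to removal, since "no longer required by law" is a legal judgment.
    """
    prompt = (
        "You are reviewing administrative code. State whether the following "
        "rule appears outdated, redundant, or in conflict with other rules, "
        "and briefly explain why. Answer exactly NONE if it appears current.\n\n"
        + rule_text
    )
    answer = llm_call(prompt).strip()
    if answer.upper().startswith("NONE"):
        return None
    return Flag(rule_id=rule_id, reason=answer)
```

Even in a sketch this small, the governance questions above are visible: the evaluation criteria live in an unversioned prompt string, and nothing in the pipeline itself forces a human to review a flag before it drives a removal.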
Our take: This is likely only the beginning of AI being applied directly to regulatory tasks and informing decisions made by regulators. Many organizations will be watching how successful these initiatives are, as analyzing policy documents and suggesting simplifications could be hugely valuable for large enterprises with overlapping and conflicting policies.
As always, we welcome your feedback on content! Have suggestions? Drop us a line at newsletter@trustible.ai.
AI Responsibly,
- Trustible Team