Welcome to the first AITX AI Policy Update

I’m Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and co-host of the Scaling Laws podcast.

Every month, I’ll share analysis of the most important AI policy news, with a focus on the regulatory shifts that matter most to developers and builders.

I also plan to share some recommended sources for good policy deep dives. Your feedback is welcome! If you’d ever like to dive further into AI policy, give me a shout!

Policy Update #1

tl;dr President Trump has unveiled an ambitious yet ambiguous push for "AI Dominance," kicking things off with a sprawling AI Action Plan that balances innovation with safety.

The Plan has garnered rare consensus from across the AI spectrum—celebrated for championing open-source development, accelerating AI adoption, and investing in scientific applications of AI. But with major legal hurdles ahead and key policy areas left untouched, the real challenge now lies in turning this blueprint into functional policy.

President Donald Trump pledged to set the nation on a path toward “AI Dominance” on the first day of his administration. He gave his advisors 180 days to develop recommendations to achieve that lofty yet vague goal. Last week, they published their findings in the AI Action Plan. The Plan includes 30 provisions related to three pillars: AI innovation, AI infrastructure, and international AI diplomacy and security.

To date, there appears to be a general consensus among AI stakeholders—from doomers to accelerationists—that the Plan struck a proper balance between encouraging innovation and establishing safeguards. Members of the AI safety community, for example, praised the inclusion of recommendations to bolster AI interpretability work and empower the Center for AI Standards and Innovation. Individuals and institutions pushing for continued innovation applauded the Plan’s embrace of open source and its commitment to removing regulatory barriers that hinder development of the AI stack.

Still, like a new dresser from IKEA, this thing still has to be assembled and, just like that same dresser, there’s no guarantee assembly will go according to plan. Over the coming days and weeks, the administration will likely release several executive orders that further explain how some of the Plan’s bolder provisions will be put into place. Yet many of those orders, such as those pertaining to the use of federal lands for data centers and the establishment of categorical exclusions from environmental review for certain AI infrastructure projects, will meet stiff legal opposition. In short, there’s a lot of assembling to do.

Though the Plan did cover an impressive amount of territory, it also omitted some key areas of AI regulatory concern. National security experts noted that some of the provisions related to defense were short on details and substance. Folks concerned about the lack of access to data bemoaned the absence of intellectual property provisions. These omissions, however, likely reflect the fact that the President’s powers to influence AI policy only go so far. Congress must drive policy shifts related to patents and copyright, for instance. 

If you’d like to dive deeper into the Plan, consider listening to this episode of the Scaling Laws podcast.

You should also feel free to shoot me an email: [email protected]

For those actively working on developing and deploying AI products, here is my list of the three most important provisions: 

Encourage Open-Source and Open-Weight AI

What it says:

This section champions AI models that are made freely available by developers for anyone to download and modify. These models are seen as having “unique value for innovation” because startups can use them flexibly without being dependent on a closed model provider.

In a stark pivot from an earlier era in which open source was flagged by many as a security threat, the plan notes these models’ benefits for commercial and government adoption, especially where sensitive data cannot be sent to closed vendors. It also highlights their essential role in academic research, which often needs access to a model’s weights and training data to run rigorous experiments.

The government’s recommended actions include improving the financial market for compute to ensure access to large-scale computing power for startups and academics, building a “lean” National AI Research Resource operations capability, publishing a new National AI Research and Development Strategic Plan, and convening stakeholders to drive adoption of open-source and open-weight models by small and medium-sized businesses.

Why it matters:

For small AI labs and developers, the emphasis on open-source and open-weight AI is a significant boon. It means increased access to foundational AI models and computing resources without the prohibitive costs or restrictive terms often associated with proprietary systems.

This fosters a more level playing field, enabling smaller entities to innovate, experiment, and build upon existing advanced models. The focus on establishing a healthy financial market for compute aims to democratize access to the computational power necessary for AI development, which is typically a major barrier for smaller players. Furthermore, government support for adoption by small and medium-sized businesses could create a broader market for solutions built on these open foundations. 

It’s unclear how the largest labs will respond to this call for more open-source development. Rumor has it that both Meta and OpenAI have been questioning their respective open-source projects.

Enable AI Adoption

What it says:

This section identifies slow, limited adoption of AI—especially within large, established organizations—as the primary bottleneck to harnessing AI’s full potential.

It points out that critical sectors like healthcare are particularly slow to adopt due to distrust, lack of understanding, complex regulations, and unclear governance. The plan calls for a coordinated federal effort to establish a “dynamic, ‘try-first’ culture for AI across American industry.”

Recommended actions include establishing regulatory sandboxes or AI Centers of Excellence where researchers and enterprises can rapidly deploy and test AI tools, with support from agencies like the FDA and SEC. It also proposes launching domain-specific efforts (e.g., in healthcare, energy, agriculture) to develop and adopt national standards for AI systems and measure productivity gains.

Why it matters:

This provision directly addresses a major challenge for AI product developers: getting your innovations integrated into existing industries. By promoting “regulatory sandboxes” akin to those seen in some states for drones, the government aims to create environments where new AI tools can be tested and validated more easily, potentially reducing the time and cost associated with navigating complex regulatory landscapes.

For small AI labs, this could mean faster pathways to market and clearer guidelines for developing industry-specific solutions. The push for national standards in key sectors like healthcare and energy will provide much-needed clarity, reducing uncertainty for developers building solutions for these critical areas and fostering broader acceptance.

Invest in AI-Enabled Science

What it says:

This part of the plan acknowledges that existing AI models are already hastening scientific advances, with systems modeling protein structures and novel materials, and promising to formulate hypotheses and design experiments.

It emphasizes that sustaining and amplifying this transformation requires critical changes in how science is conducted, including building enabling scientific infrastructure and turning theoretical advances into industrial-scale enterprises.

Key policy actions include investing in automated cloud-enabled labs for various scientific fields, supporting focused-research organizations for fundamental scientific advancements, incentivizing researchers to release high-quality datasets publicly, and requiring disclosure of non-proprietary datasets used by AI models in federally funded research. There are also provisions calling for specific research into AI interpretability, control, and robustness.

Why it matters:

For AI developers, particularly those interested in scientific applications, this section signals a significant increase in funding and infrastructure for AI-driven research. The investment in automated cloud-enabled labs means more accessible and powerful environments for developing and testing AI models for scientific discovery. The emphasis on high-quality, publicly available datasets is critical—as we all know, data is the fuel for AI development, and better data leads to better models.

Furthermore, the focus on AI interpretability, control, and robustness directly addresses key challenges in making AI more reliable and trustworthy, especially for high-stakes scientific and industrial applications. Breakthroughs in these areas will lead to more robust and deployable AI systems, opening up new avenues for innovation and application.
