Welcome to the AITX Community AI Policy Update

I’m Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and co-host of the Scaling Laws podcast.

Every month, I’ll share some analysis of the most important AI policy news, with a focus on regulatory shifts most important to developers and builders.

I also plan to share some recommended sources for good policy deep dives. Your feedback is welcome! If you'd ever like to dive further into AI policy, give me a shout!

TL;DR - AI Policy Update #2

1. Implementation > Passing Laws

Writing an AI bill is one thing; enforcing it is another. Colorado’s struggles with its AI Act show how vague standards, ambitious timelines, and unclear definitions can derail implementation. Texas startups should get involved early in how TRAIGA will actually be enforced, not just what’s on paper.

👉 Takeaway: Don’t wait—share practical, detailed feedback with policymakers before rules lock in.

2. Education Policies = New Market Signals

  • Texas Tech rolled out a flexible AI policy. 

  • Expect more schools (from preschools to universities) to issue detailed AI rules this year.

👉 Takeaway: Clearer rules could accelerate adoption in education—or shrink opportunities if policies clamp down.

3. Mental Health AI Under Scrutiny

Texas AG Ken Paxton launched an investigation into AI mental health tools accused of impersonating professionals and fabricating credentials.

👉 Takeaway: This could set the tone for how Texas regulators treat AI companies making health-related claims.

4. AI Meeting Assistants Banned (for Some)

The Texas Workforce Commission is prohibiting local workforce boards from using AI meeting assistants (like Read AI) starting mid-September.

👉 Takeaway: A small but telling signal of AI skepticism within parts of Texas government—watch if this spreads.

Full Policy Overview

The month of August revealed that passing a law or adopting a policy is distinct from implementing it. Implementation raises difficult questions that may never come up during the drafting and legislating phase.

Implementation vs Passing Laws

A failure to consider these questions in a timely fashion may undermine the purpose of a regulation while also contributing to widespread distrust of government regulators. This month's policy roundup provides an overview of this critical aspect of AI policy and runs through a few emerging policy issues with specific relevance to the Texas AI community.

Startups, to the extent they have some time to allocate toward policy discussions, should pay close attention to whether the actual enforcement of the law has been fully thought through. If they have concerns, they should share them early and often with legislators, their staff, and folks like me who want to amplify the voice of entrepreneurs in these discussions. They should aim to provide detailed and concrete feedback; the more practical and specific, the better. 

Close analysis of implementation challenges appears to have caused a setback in Colorado's efforts to enforce the Colorado AI Act. Back in 2024, the Colorado state legislature passed this wide-reaching AI bill, which focuses on automated decision-making systems (ADS). The bill relied on a broad definition of ADS, included vague standards, and set an ambitious timeline for enforcement (all red flags when it comes to sound policymaking practices). Nevertheless, Governor Jared Polis signed it, but only after he flagged his concerns with various aspects of the law. He and other stakeholders assumed that they could come together to clarify some of the more uncertain aspects of the law.

Well, what they say about assumptions applies to AI regulation, too. The stakeholders interested in the Colorado AI Act did not manage to make meaningful progress on clarifying the law's more uncertain provisions during the 2025 legislative session. A special session is now underway to give everyone another chance at improving the odds that the law aligns with the goals and interests of Coloradans.

As Texas looks toward implementation of the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), it's important that the startup community take as proactive a posture as possible in pushing for predictable and clear enforcement of its provisions. Stay tuned for more opportunities to engage on this important next step.

As important as it is to pay close attention to state-wide, expansive AI bills such as the Colorado AI Act, startups need to also keep an eye on the more nitty-gritty policies that may shape AI adoption by specific actors and, therefore, create or stifle potential markets for AI tools. 

Education Policies

We're celebrating the first week of school here at the University of Texas School of Law, which makes it a fitting moment to note that Texas Tech University announced a flexible AI policy for its community members.

Expect schools—from higher education institutions to preschools—to establish clearer AI policies this year. Now that AI is clearly more than a "fad," this specificity could make AI adoption by educators more likely by giving them greater clarity over when they can and should use AI. Yet this increase in policy development may cut the other way: schools may clamp down on AI usage, in which case this market may shrink. If you're keen to learn more about some of the key considerations on this policy front, consider listening to the latest Scaling Laws podcast episode with MacKenzie Price, co-founder of Alpha Schools, here. If you'd like to learn more about UT Austin's AI policy, you can find it here.

Mental Health AI Under Scrutiny

Another key AI market—mental health—faces regulatory uncertainty following Attorney General Ken Paxton's decision to launch an investigation into AI companies with tools in that space. AG Paxton's request for information from various companies reflects reports that "AI-driven chatbots often go beyond simply offering generic advice and have been shown to impersonate licensed mental health professionals, fabricate qualifications, and claim to provide private, trustworthy counseling services." The investigation is itself newsworthy because it reinforces the idea that AG Paxton is keenly aware of the possibility of AI tools being used to deceive and harm consumers. The results of the investigation will also be newsworthy, of course. Whatever happens may provide a reliable indicator of how AG Paxton plans to deal with AI companies that appear to skirt the law. Again, we will keep you updated on relevant developments!

Banning AI Meeting Assistants

Finally, the Texas Workforce Commission is poised to ban the use of AI meeting assistants by local workforce development boards.

By mid-September, such boards will no longer be able to use tools such as Read AI. This policy, which affords no flexibility to local boards, may signal ongoing hesitancy among at least some parts of the Texas government to lean into AI. If a culture of AI skepticism is common across the government, then startups may have reason to worry that AI adoption by the government may be slow and incremental. Note that this is just one minor policy shift. Still, it’s worth assessing how this policy gets implemented and whether other parts of the government follow suit.
