Hello and welcome to Regulator. If you’re a subscriber, you are stalwart and true, and if you’re here from the internet, prove your chivalry and worth by subscribing to The Verge here. (And if you’re David Sacks: we said what we said.)
As of Tuesday, President Donald Trump has committed to signing some sort of executive order that would do something that would give him some federal control over AI regulation. I state this in the vaguest of terms for two reasons: First, there’s still no good constitutional rationale for an executive order to override laws that states pass for themselves, let alone on artificial intelligence, and the version of the executive order that leaked from the White House in November immediately presented an overwhelming number of legal issues (to say nothing about the David Sacks of it all).
Second, Trump was just as vague about what he hopes to accomplish when he made the announcement — naturally, on Truth Social.
Screenshot: Truth Social
Unfortunately, this presidency is run on tyrannical vibes and Diet Coke, so one can safely assume that while whatever emerges from the White House won’t pass legal scrutiny, Trump sure as hell will push his people to do whatever he wants them to, to do it quickly, and to not question his judgment about it. (Imagine, if you will, that “states’ rights” is “the East Wing of the White House,” and “control over America’s AI policy” is “a ballroom.”)
But the potential political fallout won’t be felt in Washington — at least, not immediately.
This week, I’m talking to Brendan Steinhauser, the CEO and cofounder of the bipartisan Alliance for Secure AI, about whether AI regulations — or the lack thereof — will become a hot-button issue for voters in the upcoming midterms. Steinhauser is a Republican political strategist based in Austin who has primarily worked for Texas candidates, managing winning campaigns for former Rep. Michael McCaul, Rep. Dan Crenshaw, and Sen. John Cornyn. His resume also includes a stint in early-stage Tea Party politics, serving as national director of federal and state campaigns for the grassroots organization FreedomWorks from 2009 to 2012.
Suffice to say, Steinhauser understands red state voters, but found plenty of common cause with Democrats to create the Alliance, launching the nonprofit in July 2025. (Honestly, I am shocked that in the year 2025, a political organization can have leadership and staff who’ve worked for the Biden administration, Senate Democrats, the DCCC, the Texas Republican congressional delegation, and Speaker Mike Johnson’s office. But that’s AI horseshoe theory for you.)
Polling on the issue, he admits, is early: So far, two polls conducted by the conservative Institute for Family Studies in partnership with YouGov have found that voters reject the idea of the federal government overriding state AI regulations. But there’s growing evidence that red state voters are increasingly skeptical of the AI industry, and Steinhauser sat down with me to walk through what he was seeing: religious backlash, social backlash, and, quite unusually, state-governments-against-Washington-Republicans-chasing-a-moratorium backlash.
“I’m someone who’s advised Republicans for 20 years off and on, and worked with them and campaigned for them and dealt in grassroots politics, trying to understand voters and advise candidates on how to think about voters and talk to voters,” he told me during a phone interview. “I just don’t think they’re seeing [things] six months from now.”
- “Square’s product chief on the death of the penny and the future of money”, Decoder: Square’s Willem Avé talks to The Verge editor-in-chief Nilay Patel about AI automation, investing in crypto, and what it’s like working for Jack Dorsey.
- “AI ‘creators’ might just crash the influencer economy”, Terrence O’Brien: On the slop-filled internet, Jeremy Carrasco uses his platforms to spread AI literacy.
- “The war on disinformation is a losing battle”, James Ball: How the systems that fight misinformation and disinformation became misconstrued and dismantled.
- Review: “A first look at Google’s Project Aura glasses built with Xreal”, Victoria Song: It’s kinda like a pair of chunky sunglasses that runs Android apps.
- Review: “Trump Mobile’s refurbished iPhones are an unsurprisingly bad deal”, Dominic Preston: Would you like a three-year-old used iPhone for almost $500?
“Guys, start thinking about where things are going to be six to nine months from now”
This interview has been edited for clarity.
You started working on the Alliance for Secure AI in 2024, and the nonprofit launched in July 2025. Between then and now, what happened to average voters’ awareness of AI as an issue?
Brendan Steinhauser: I think those things are difficult to measure other than through public opinion polling and things like that, and looking at news media coverage and anecdotal stories in everybody’s lives. I would say that throughout 2024, the public opinion related to AI and the awareness of what was happening and how fast things were moving was just not there. But sometime around late fall, winter of 2024, it started to pick up quite a bit. I honestly give a lot of credit to journalists covering the rise of advanced AI and saying this could rapidly advance — Kevin Roose at The New York Times, for example, and Ezra Klein and Ross Douthat and others like that. With the megaphone that they have, it started to bring this issue more front and center for a lot of people.
The other thing that contributed in a great way was the DeepSeek moment. I think that really made more mainstream people — I really hate to use the word “mainstream,” but I’m talking about people that are going about their daily lives, who are focused on other things — it got their attention and they started to focus more on AI. Certainly, the release of new models helped over time. But I would say that the media coverage and the DeepSeek occurrence really sparked a lot of this.
Going from the DeepSeek moment to the first vote on the AI state law moratorium took only months, but once the idea of a moratorium had been floated in Congress, the states reacted very negatively towards it. Could you explain, for an audience who doesn’t really follow that stuff as closely as you and I do, what that looked like?
One thing about state legislators and governors and attorneys general is that they’re proud of the work that they do on the issues they care about. A lot of times, they’re on different sides of important political questions. But when they get to work together on things in a bipartisan fashion and they’re able to pass laws and get the bills signed into law, they’re proud of that accomplishment and they want to see that continue. I do think the fact that so many states passed laws related to AI policy created a situation where they were dug in and defending those laws.
You have this interesting mix of Republicans and Democrats from around the country, whether they were lawmakers or attorneys general or governors, who said: We worked really hard to do something important here, and we don’t want the federal government to just overturn our work. So they started speaking out about this. They started posting on social media. They started calling their members of Congress, their US senators. And of course, if they had a good number, they were calling the White House, saying, Don’t overturn what we did in our state.
Verge readers will be familiar with AI regulation efforts in states like Colorado and California, but maybe not with the explosion of AI regulations coming from deep Republican states. You’re from Texas, which passed a comprehensive law regulating AI earlier this year. What is driving AI regulation in red states, and why are they so protective of it against federal intrusion?
That’s the really important question and hard to answer succinctly. So I’ll try and start with broad strokes and we can get into more detail. But I think Texas represents red states in that it’s very conservative. It’s a very religious state, it’s very socially conservative, so many of the lawmakers and the governor and others are looking at it through that lens. They’re looking at the impacts of advanced AI on their people, on the health and well-being of their people, especially young people. They’re worried about the social ills, the potential for negative impacts on families. They’re worried about this epidemic of mental health crises and suicides that we’ve seen related to AI.
They’re also worried about AI being seen as almost something that will attempt to replace God. That is a theme that I hear again and again here in Texas, when I meet with faith leaders and regular people — this instinctual reaction to this technology that is being discussed as if it were an omniscient, omnipresent thing. So that offends their sensibilities.
There’s also this important concept in the Constitution — the 10th Amendment, the idea of federalism, which many conservatives and libertarians have supported, at least in theory, for a long time. I think that they come out more in support of the 10th Amendment when they see that the federal government is trying to overturn something they’ve worked on. We saw this when Republicans pushed back against the Obama administration on healthcare 15 years ago. We’ve seen this in a few instances in the Trump administration with Republicans here in power. But mostly, I’ve seen it on AI because I think it’s an issue that these lawmakers want to get ahead of and make sure they’re protecting their citizens. It’s just something they care passionately about.
To be honest with you, I’ve been pleasantly surprised and somewhat encouraged by the bipartisan nature of this effort. The fact that you have these very far-right Republicans in the legislature in Texas and these far-left Democrats getting together on this and joining hands has been pretty spectacular. So I think that really shows how powerful this movement can be.
That’s actually happening in the Texas state legislature? Like, if I were to look at their public statements from both sides of the spectrum, they would be united on this?
Oh yeah. I have a letter I can share with you that was just sent over to Sen. Cruz and Sen. Cornyn a couple weeks ago. So the Texas Senate has 31 members, and this letter was able to get nine Republicans and seven Democrats to sign on with their names together, all their signatures, in no particular order. It had some really great language in there about AI: protecting kids, AGI, that sort of stuff. I think there would have been more and there can be more people who add their names to that, but that’s just who they were able to get in the middle of the [National Defense Authorization Act] preemption fight on short notice.
Sen. Cruz, though, has been very vocal about having a moratorium, if I’m remembering that correctly. And it does seem a little bit emblematic of a split within the Republican Party itself on AI. What accounts for the part of the GOP that is okay with a moratorium?
I’ve had plenty of conversations with the senator himself and with his team, and I have always observed that Sen. Cruz has certain core principles that he believes in. Yes, he analyzes things through a political lens as well, but you can see plenty of examples of him fighting for things that he believes in because he believes in them, whether it’s going to Iowa and campaigning against ethanol subsidies and winning the state of Iowa in 2016, or speaking out for free trade when President Trump wasn’t too happy about that in the first term, or speaking up for the state of Israel, for the Jewish people in the Jewish state of Israel specifically — taking flak on that. Like, those are examples of things where he just believes things and he fights for them. I think this is similar.
I do think that Andreessen Horowitz and David Sacks and other similar folks — like Joe Lonsdale, who I know and I like — have a disproportionate influence there. But I also know that Sen. Cruz is tracking some of these harms. He is tracking the rapid advancement of AI. He’s thinking about AGI and ASI. I think he looks at it as, we’re in this situation where we have to race to beat China in developing AI, and if we don’t, they will [develop it] anyway. And he doesn’t believe — and I’ve asked him this — he doesn’t believe that we can make a deal with our adversary to race commercially and to not race to superintelligence.
He also does have that kind of small-L libertarian mindset of not wanting, in his view, burdensome or onerous regulations on industry, which I get and I respect. And I don’t [want those] either. I just think that AI, and advanced AI in particular, is a different category for so many reasons because of the capabilities. Because we could easily lose control of it. And these guys are clearly not taking the safety precautions that they need. There’s just a ton of evidence of that. Long answer, apologies.
Do you think his views reflect the rest of the GOP that’s pro-moratorium? Like, how do they think about the development of AI and why does that require a moratorium?
I think the specific answer to that is, that’s what the AI industry wants. If you include lobbying money, plus PACs, plus all the 501(c)(3)s, 501(c)(4)s, etc., etc., they’re on pace to spend $250 to $300 million on all of those things pushing this agenda, and they don’t want to deal with the safeguards that we all support. The industry just has a tremendous amount of impact and influence. And until the Republican senators and members of the House really see more and more regular people engaging in this, calling them up, going to their town hall meetings, speaking out on social media, they’re going to go with that immediate incentive.
I’m someone who’s advised Republicans for 20 years off and on, and worked with them and campaigned for them and dealt in grassroots politics, trying to understand voters and advise candidates on how to think about voters and talk to voters. I just don’t think they [the Republican Party] are seeing [things] six months from now. They’re not seeing around the corner. They’re looking at what’s immediately in front of them, and it’s all the money and all the threats of Big Tech and all the influence, all the great things that Andreessen and Sacks are telling them. And it’s like, Guys, start thinking about where things are going to be six to nine months from now.
In the actual midterms, yeah.
Oh yeah. The midterms, and also potentially with the economy. If we are in an AI bubble and we’re automating jobs to AI and there are freezes on jobs and all these other things — yeah, there’s tariffs in the mix and there’s other stuff in the mix, but Republicans are going to have to deal with that. Unfortunately, politically speaking, they’re going to own an economy if it’s a bad economy. And guess what? AI is going to be a huge part of that story if it is a bad economy. I don’t think that is inevitable, but I think that if that does happen, voters — including their base and independent voters — are going to say, well, You guys gave all this leeway to Big Tech. You didn’t do anything to place a check on them, and then now look at where we are.
It takes people getting active on this issue, and I think in time they will more and more, but unfortunately, it may take some really bad things that cause people to take more action. The economy, first and foremost, but people are definitely paying attention also to these harms to kids and young people. The 60 Minutes thing last night about Character AI was a big deal.
It’s hard not to see it everywhere.
Yeah, you really have to be burying your head in the sand to not see it and have a basic intuition about what’s going on. Is it good or bad? Is it a mix? I do think most people are like, Yeah, it’s kind of a mixed bag, but we don’t need to create a digital God. Most people don’t believe that’s possible, but they’re worried about what the developers are trying to build. And I think most people also would say they shouldn’t just be able to accelerate without any guardrails — develop it to help people, but we need some checks on these companies, because look what they did with social media. I do think that’s a very mainstream position.
And now, more Holiday Season Recess.
I’m sorry, I can’t help but return to The Discourse: