On Thursday, the Senate Commerce Committee hearing room felt more like a rally for American technological supremacy than an adversarial interrogation. Sam Altman sat at the witness table alongside officials from Microsoft, AMD, and CoreWeave, fielding questions from politicians who seemed more interested in accelerating the AI business than in scrutinizing it. Two years ago, Altman stood before Congress to request regulation. This time, most of his testimony was devoted to explaining why nearly every proposed regulation would be disastrous.
The shift in stance is striking, though hardly surprising given the changed political environment. Beginning with the revocation of Biden’s extensive AI executive order, President Trump has made eliminating what he calls “barriers” to AI advancement a top priority. Chairman Ted Cruz reflected this new reality by framing the discussion around defeating China rather than shielding workers or consumers from AI’s negative effects. Cruz declared his intention to introduce legislation creating a “regulatory sandbox” for AI development, specifically designed to stop state-level rules from fragmenting the national market.
Particular attention should be paid to Altman’s evolution on regulation. In 2024 he testified that AI posed significant risks that warranted government oversight. He discussed alignment problems, potential abuse, and the need for safety requirements before powerful systems are deployed. That Sam Altman appears to have retired. On Thursday, when questioned about proposals requiring AI developers to vet their systems before public release, Altman described them as “disastrous” for the industry. Asked whether the National Institute of Standards and Technology should at least write technical standards for AI safety, he responded with measured vagueness: “I don’t think we need it. It can be helpful.”
| Category | Details |
|---|---|
| Hearing Date | Thursday (Recent 2026 Senate Session) |
| Committee | Senate Commerce Committee |
| Key Witnesses | Sam Altman (OpenAI CEO), Brad Smith (Microsoft President), Dr. Lisa Su (AMD CEO), Michael Intrator (CoreWeave CEO) |
| Committee Chairman | Senator Ted Cruz (R-Texas) |
| Primary Topic | Artificial intelligence regulation and US-China AI competition |
| Industry Position | Hands-off regulatory approach; increased infrastructure investment |
| Altman’s 2024 Position | Openly embraced AI regulation at previous hearing |
| Altman’s 2026 Position | Rejected specific regulatory proposals; called pre-deployment vetting “disastrous” |
| Cruz Proposal | Regulatory sandbox bill to remove AI adoption barriers and prevent state overregulation |
| NIST Standards Question | Altman response: “I don’t think we need it. It can be helpful.” |
| Trump Administration Priority | Remove barriers to AI innovation; revoke Biden-era AI executive order |
| China Competition Frame | Industry warned US could lose global AI race without continued investment |
| Microsoft Position (Smith) | Technology adoption worldwide will determine US-China AI winner |
| Focus Areas for Investment | Research, education, supply chains, energy infrastructure |
Altman’s views may have genuinely shifted after watching Europe enact what he considers unduly stringent AI rules. OpenAI’s competitive position may also have changed in ways that make regulation less appealing than it was two years ago. When you are fighting to maintain market supremacy against Anthropic, Google, and a dozen well-funded startups, even reasonable safety rules can feel like weights on your ankles. Altman did endorse “sensible regulation that does not slow us down,” but offered few details about what that might entail.
The China frame dominated the hearing in ways that seemed almost purposefully designed to cut off regulatory discussion. Brad Smith of Microsoft expressed the reasoning best: “The number one factor that will define whether the United States or China wins this race is whose technology is most broadly adopted in the rest of the world.” The implication is that any policy reducing the competitiveness of American AI companies in international markets essentially hands Beijing the advantage. For Republican politicians already inclined to view technology competition through a national security lens, this is a compelling case.
Anyone who remembers the social media debates of the 2010s will find it almost nostalgic to watch this unfold. Facebook, Twitter, and YouTube made similar claims about the necessity of moving quickly and avoiding regulations that could benefit overseas rivals. The result was a decade of mostly unfettered platform expansion, producing problems we are still attempting to resolve: misinformation, polarization, mental health effects, and privacy abuses. The AI industry insists this time is different, but the rhetorical patterns look familiar.
Cruz’s proposed regulatory sandbox is an intriguing idea borrowed from financial technology regulation, where it has produced mixed results. The goal is to establish protected spaces where businesses can test new technology without immediately complying with all existing regulations. In practice, sandboxes can either spur genuine innovation or serve as a shield against accountability until products are too entrenched to regulate properly. The outcome typically depends on how the sandbox is designed and monitored.

The emphasis on infrastructure spending reveals where the industry actually wants government involvement. Altman and the other executives stressed the need for more funding for AI research, education initiatives to produce a trained workforce, supply chain security for chips and components, and energy infrastructure to power the enormous data centers AI training requires. Government that removes barriers and supplies resources while staying out of product choices and safety rules is the acceptable face of industrial policy.
Whether Congress will actually pass significant AI legislation this session remains an open question. Cruz’s sandbox bill would have to overcome the usual legislative obstacles, including conflicting priorities, partisan disagreement, and lobbying from affected businesses, though the Senate Commerce Committee plays a significant role in any tech-related measures. The hearing suggested that Republicans and tech corporations agree on light-touch regulation, but that consensus could fracture if high-profile AI failures generate public demand for stricter oversight.
Watching Altman testify, there is a tension between his public remarks and OpenAI’s actual behavior. The company has consistently shipped powerful AI systems, including GPT-4, DALL-E, and now numerous increasingly capable model versions, while managing safety concerns that its own researchers occasionally raise in public. Much of the current policy debate lives in the space between “we’re being extremely careful” and “please don’t make us prove we’re being careful through regulatory compliance.”
It is evident that the broader tech sector sees this as a critical moment to shape AI governance before regulations harden into place. Every executive at Thursday’s hearing stressed the same themes: the risks of European-style regulation, the necessity of investment over restriction, Chinese competitiveness, and American leadership. It is a well-coordinated message, delivered with the assurance of an industry that knows it holds both technological prowess and political momentum, and one that lawmakers find difficult to counter.
Largely absent from the hearing were questions about what happens to workers displaced by AI, how to prevent algorithmic discrimination, who is responsible when AI systems cause harm, and whether the concentration of AI development in a handful of corporations creates monopolistic dangers. These questions received only nominal acknowledgement, not genuine engagement. Growth, competition, and maintaining America’s lead over China were the priorities.
As he left the session, Sam Altman had effectively shifted the regulatory question from “how do we make AI safe” to “how do we make AI American.” Whether that is good for the nation, or merely for OpenAI, depends on whether you believe the tech sector will self-regulate responsibly, or whether you believe the social media era taught us that moving fast and breaking things sometimes leaves broken things that never get fixed.