The Ethical Blindspots in America’s AI Race
What Sam Altman’s U.S. Senate Testimony Really Revealed
The U.S. Senate Commerce Committee hearing of 8th May 2025, featuring testimony from executives at OpenAI, Microsoft, AMD, and CoreWeave, revealed critical insights about the direction of AI development and governance in the United States.
As I watched a replay of the proceedings, I found myself reflecting on the profound ethical questions raised, both explicitly and implicitly, by these industry leaders.
The Innovation vs. Regulation Balancing Act
A prominent theme throughout the hearing was the tension between rapid innovation and appropriate regulation. Sam Altman characterised AI as “bigger than the internet,” emphasising the transformative potential of this technology while cautioning against regulatory approaches that might impede development.
Altman specifically warned that a “patchwork regulatory framework” across states “would be quite bad” and would significantly impair developers’ ability to move quickly.
I’m struck by how this highlights the real challenge of creating coherent governance structures in a federal system. How do we balance the need for national consistency with state-level experimentation in policy?
The executives generally favoured a risk-based approach to regulation, with Altman stating, “I think some policy is good… I think it is easy for it to go too far.” This balanced view acknowledges the need for certain guardrails while emphasising that overly burdensome pre-approval processes could hinder American competitiveness.
The concept of “iterative deployment”, where AI tools are put into users’ hands early to allow the technology and society to co-evolve, was presented as a practical middle ground. I believe this approach merits serious consideration, as it recognises the difficulty of anticipating all risks before real-world deployment while maintaining the ability to adjust as concerns arise.
The Geopolitical Framing
The hearing consistently framed AI development as a geopolitical competition, particularly between the United States and China. Brad Smith of Microsoft stated that “the number one factor that will define whether the United States or China wins this race is whose technology is most broadly adopted in the rest of the world.”
This competitive framing raises ethical questions that I find deeply troubling. Does framing AI development primarily as a “race” incentivise cutting corners on safety? What happens when the imperative to win conflicts with ethical considerations?
I wonder if we’re asking the right questions about what “winning” actually means.
Lisa Su of AMD stated, “AI is truly the most transformative technology of our time. The United States leads today. But what I would like to say is it is a race. Leadership is absolutely not guaranteed.” This perspective highlights both the current American advantage and its fragility.
The historical context of American technological innovation was referenced multiple times, with Altman noting, “I don’t think the internet could have happened anywhere else. And if that didn’t happen, I don’t think the AI revolution would have happened here.”
I find this connection between past technological leadership and future potential compelling, but it also raises questions about what conditions actually foster technological innovation.
The Hidden Ethical Dimensions
The hearing highlighted critical infrastructure needs for advanced AI, particularly around energy and data centres. There was a unanimous call for “dramatically more power generation in this country,” reflecting the enormous energy requirements of AI systems.
Altman’s statement that “eventually the cost of intelligence, the cost of AI will converge to the cost of energy” has profound implications that I can’t stop thinking about. If access to AI capabilities becomes primarily determined by access to cheap, abundant energy, what new forms of technological inequality might emerge? I worry that we haven’t fully grappled with this question.
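To make the stakes of that claim a little more concrete, here is a minimal back-of-envelope sketch in Python. If inference costs really do bottom out at the electricity bill, the floor price of intelligence is set by just two numbers: the energy consumed per generated token and the local price of power. Both input figures below are hypothetical placeholders chosen for illustration, not measurements from any real model or data centre.

```python
# Back-of-envelope sketch: if AI cost converges to energy cost, the floor
# price of inference is (energy per token) x (price of electricity).
# Both inputs below are hypothetical illustrative values, not real data.

JOULES_PER_KWH = 3.6e6  # physical constant: 1 kWh = 3.6 million joules


def electricity_cost_per_million_tokens(energy_per_token_j: float,
                                        price_usd_per_kwh: float) -> float:
    """Electricity cost in USD to generate one million tokens."""
    kwh_needed = energy_per_token_j * 1_000_000 / JOULES_PER_KWH
    return kwh_needed * price_usd_per_kwh


# Hypothetical inputs: 0.3 J of electricity per token, $0.08 per kWh.
cost = electricity_cost_per_million_tokens(0.3, 0.08)
print(f"~${cost:.4f} of electricity per million tokens")

# The point of the exercise: once hardware and model costs amortise away,
# halving the price of power halves this floor, so cheap, abundant energy
# becomes the decisive input to cheap, abundant intelligence.
```

On those placeholder numbers, the energy floor comes out well under a cent per million tokens, which is exactly why the hearing’s preoccupation with power generation and grid capacity matters: whoever commands cheap, abundant energy sets the marginal cost of intelligence.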
The permitting processes for energy infrastructure and data centres were identified as major bottlenecks. Michael Intrator of CoreWeave described current permitting processes for building large AI infrastructure as “excruciating,” while Brad Smith pointed to federal wetlands permits administered by the Army Corps of Engineers as taking “often 18 to 24 months.”
Energy source diversity became a point of debate, with some advocating for a mix including natural gas, advanced nuclear, fusion, wind, and solar. One senator expressed concern about 90% of new generation coming from solar and wind, calling it “not affordable,” “not abundant,” and “not reliable.”
I think these debates around energy sources add another ethical dimension we must consider: how do we balance immediate competitive needs with long-term environmental sustainability?
The Workforce and Social Impact Considerations
While technological development dominated the hearing, there were also discussions about workforce implications. Senator Cantwell highlighted the need for “hundreds of thousands of new electricians” to support the physical infrastructure required for AI, pointing to workforce development as a critical component of maintaining leadership.
Lisa Su emphasised that the US should be the “best place to study AI, to work in AI”, highlighting both education and immigration policy as key factors in talent development. This raises questions I believe we must address more directly: how do we ensure educational access doesn’t become another vector for inequality? What happens to workers whose skills become less valuable in an AI-driven economy?
The testimony touched only briefly on the broader social impacts of AI deployment. I found it concerning that the focus remained primarily on economic competitiveness and national security, with less attention paid to how AI might affect inequality, job displacement, or social cohesion.
Shouldn’t these considerations be central to the conversation rather than peripheral?
Child Safety and Vulnerable Populations
Altman provided one of the clearer ethical stances of the hearing when discussing child safety: “For children, there needs to be a much higher level of protection, which means the service won’t do things that they might want.” I was encouraged by this acknowledgement of the need for special protections for vulnerable users, as it represents an important ethical principle.
The hearing referenced learning from past mistakes with the internet and social media, particularly regarding child protection. The “Take It Down Act”, which targets the non-consensual publication of intimate images, including AI-generated deepfakes, was mentioned as relevant legislation that might inform AI governance approaches.
Other ethical concerns raised included misinformation and deepfakes, algorithmic bias and discrimination, intellectual property and compensation for content creators, and environmental impacts. These issues were acknowledged but generally not explored in depth during the testimony.
I wonder whether we are at risk of repeating past mistakes by not addressing these concerns more substantively at this stage of AI development.
Towards a More Comprehensive AI Governance Framework
Based on the hearing testimony, I believe several elements are essential for effective ethical AI governance:
Tiered, risk-based regulation that applies different standards based on potential harm (endorsed by Altman)
Clear federal standards to avoid a fragmented regulatory landscape
Streamlined permitting processes for essential infrastructure
Increased investment in domestic energy generation and grid capacity
Enhanced education and workforce development programmes
Refined export controls that balance national security with global adoption of American technology
Public-private partnerships in research and development
International coordination on standards with like-minded countries
The hearing revealed that industry leaders, while cautious about excessive regulation, do recognise the need for government involvement in establishing certain guardrails.
Brad Smith noted that the United States should “be in the game internationally to influence the rest of the world,” pointing to how European privacy laws have become de facto global standards in the absence of American leadership.
I’m left wondering: how do we design governance frameworks that can adapt as rapidly as the technology itself? How might we structure feedback mechanisms that allow us to learn from early deployments without permitting significant harms?
Beyond False Dichotomies in AI Governance
The Senate hearing revealed how discussions of AI policy often default to simplistic framings: innovation versus regulation, US versus China, speed versus safety. I believe these binary frameworks obscure more nuanced approaches that might better serve both technological development and ethical considerations.
A more productive framing might consider how governance can enable beneficial innovation while preventing harmful applications. Similarly, international cooperation on safety standards and research could complement healthy competition in commercial applications. What would it look like if we framed the goal as “responsible leadership” rather than simply “winning the race”?
The hearing demonstrated that Congress is engaging seriously with complex AI governance questions, which represents progress compared to earlier technological transitions. However, the focus remained heavily on maintaining competitive advantage, with ethical considerations often positioned as secondary concerns.
As AI continues to develop, I believe a governance framework that places ethical considerations at its centre, rather than treating them as constraints on innovation, would better serve both technological progress and human welfare. This would mean designing systems from the ground up with values like safety, fairness, transparency, and inclusivity in mind, rather than attempting to add these considerations after the fact.
What values should be embedded in our AI systems? Who gets to decide those values? How do we balance the need for rapid innovation with ethical governance?
The Senate testimony offers valuable insights into current industry and government thinking on AI governance in the U.S. While the competitive framing dominated, there were also signs of a more nuanced understanding beginning to emerge, one that recognises both the competitive and cooperative elements required for responsible AI development.
As we consider the implications of this Senate hearing and the future of AI governance, I’d like to invite you to reflect on some questions.
Do you think framing AI development as a “race” against China helps or hinders the development of ethical AI systems? Why?
What values do you believe should be prioritised in AI development and governance? How might these priorities differ across cultures and political systems?
Watch the full Senate hearing here: