I’ve learned to greet the grand pronouncements of technology as the ultimate saviour with a healthy dose of scepticism. So when I listened to Jonathan Ross, the founder of the multi-billion-dollar AI chip company Groq, I was prepared for more of the same.
He paints a compelling picture, arguing that the future of AI hinges on one thing — speed.
"Once you get speed you can never go back," he says, comparing the leap to the one we all remember from dial-up to broadband. His company, Groq, is dedicated to this single idea: building incredibly fast, low-latency chips (called Language Processing Units, or LPUs) that also happen to be more energy-efficient.
Yet, beneath the technical brilliance and the venture capital backing lies a deeper set of questions.
How does this quest for speed intersect with the things that truly matter, like fairness, safety, and our sense of purpose?
Ross offers some compelling answers, but as always, no one gets a free pass.
Democratising AI or concentrating power?
One of the most powerful stories in tech is the founder’s origin myth. For Jonathan Ross, it began at Google, where he started the project that would become Google’s TPU, its specialised AI chip.
He saw firsthand how this new hardware gave Google a huge advantage, making its AI "10 times faster than any GPU" at the time.
This experience, he claims, was the catalyst for Groq. He was motivated by a desire to ensure everyone gets access to this transformative power, preventing it from being concentrated in the hands of a few wealthy corporations. It’s a noble goal.
The prohibitive cost of AI is something I see constantly as a business owner; companies with brilliant ideas find themselves priced out, with cloud budgets starting to rival hiring budgets. For many, getting the necessary GPUs involves wait times of up to a year.
Ross’s stated mission is to drive the cost of compute to zero. To this end, the Groq Cloud platform has attracted over a million developers, offering free access for smaller-scale use, much like how a utility provides a baseline amount of electricity.
This is a genuinely positive step towards levelling the playing field.
However, this is where the picture gets more complex. To achieve global scale, Groq has partnered with large entities, including a $1.5 billion deal to deploy nearly 20,000 chips in Saudi Arabia. It is worth pausing to ask:
Can a technology be truly democratised when its expansion is funded by and deployed within nation-states that have their own complex political and human rights issues?
It highlights the difficult compromises that often accompany the mission to marry purpose with profit.
The double-edged sword of openness and safety
Access is only one part of the equation; what we have access to is just as important. Ross points to the rise of powerful open-source models from China, like DeepSeek, as a "game changer moment" that has commoditised high-quality AI.
He believes that in the long run, open always wins against closed, proprietary systems.
I agree that openness can be a powerful force for good, but Ross himself raises two significant concerns that we cannot afford to ignore.
1. Data sovereignty and trust
The first issue is data sovereignty.
When you send a query to a service based in China, he warns, there is no guarantee the company can refuse a data request from the Chinese Communist Party (CCP).
In his words, your questions could be "sending that straight to the CCP". This is a reminder that the infrastructure behind our AI interactions matters immensely.
Groq, as a US-based company, counters this by deleting all user queries and retaining no data, a crucial step for building trust.
2. Intentional bias and censorship
The second, more insidious threat is intentional bias.
It’s well-documented that some Chinese models are trained to censor information or respond in a biased way to sensitive topics like Tiananmen Square or the treatment of the Uyghurs.
The deeper fear, Ross notes, is the potential for this bias to be weaponised to influence opinions or even elections.
However, his proposed solution feels a little too convenient. He suggests that as models become more intelligent, they naturally act with more subtlety, nuance, and creativity, which in turn makes building safety guardrails "easier."
Is it really that simple? Can raw intelligence truly overcome the deeply ingrained human biases present in its training data, or the deliberate political agendas of its creators?
I’m not so sure. It feels like a techno-optimistic answer to a profoundly human and political problem.
Redefining work, purpose, and our place in the world
This brings us to the conversation that, as a parent, occupies so much of my thinking:
What will our children’s lives look like? What jobs will they have? How will they find purpose?
Contrary to the widespread fear of mass job displacement, Ross predicts AI will cause enormous labour shortages. His reasoning is threefold.
First, as AI improves services like customer support, demand for those services will rise, requiring more human oversight.
Second, new job categories will emerge, like the prompt engineer. He believes this role will eventually evolve to the point where a business can be created purely through language, unlocking the entrepreneurial potential of billions of people.
And third, in a deflationary economy where AI makes goods and services cheap, he predicts many will simply opt out of work to pursue other interests.
It’s a utopian vision, and the idea of my son’s generation being free to pursue creative passions instead of toiling for a wage is incredibly appealing.
Yet, I wonder if this opting out will be a luxury afforded only to a few, potentially creating an even greater divide between those who can live a life of leisure and those who must continue to work, perhaps in jobs managed and monitored by AI.
This is the challenge that Ross rightly identifies as the most significant. Groq’s core mission is "to preserve human agency in the age of AI".
He’s less worried about AI taking over and more concerned about how we will find purpose and motivation when we don’t have to work.
Marrying purpose and profit
Jonathan Ross is asking many of the right questions. His focus on speed is not just a technical pursuit. It’s tied to a vision of democratised access, improved AI quality, and ultimately, a world where human agency is preserved.
The technology Groq is building is undeniably impressive, and its commitment to user privacy by deleting queries is a standard others should follow. But the path from a founder’s vision to a global reality is paved with compromises and complexities.
Marrying purpose and profit is a delicate balancing act, and holding power to account requires us to look closely at who a technology empowers, who it overlooks, and whose values are embedded within it.
The gospel of speed is compelling, but our focus must remain on the direction we are heading. A faster future is coming. To ensure it’s also a fairer, safer, and more purposeful one is the work that falls to all of us.
Would you feel comfortable handing over important decisions to an AI, no matter how fast or efficient it is?
Hi, I'm Miriam - an AI ethics writer, analyst and strategic SEO consultant.
I break down the big topics in AI ethics, so we can all understand what's at stake. And as a consultant, I help businesses build resilient, human-first SEO strategies for the age of AI.
If you enjoy these articles, the best way to support my work is with a free or paid subscription. It keeps this publication independent and ad-free, and your support means the world. 🖤