
Google’s sweeping search overhaul, announced at Google I/O 2025, has sent shockwaves through the SEO industry and, arguably, through anyone thinking about how we find information online.
But are we truly grasping the profound implications of this shift? I’d argue we’re sleepwalking into a future where information access is fundamentally reshaped, and not necessarily for the better.
As someone who’s spent fourteen years in SEO and digital marketing, I find myself asking questions that go far beyond traffic drops and ranking fluctuations. When AI Mode and AI Overviews become the primary gatekeepers of information for millions, it truly makes you pause and think… Who genuinely benefits?
The Algorithmic Bias Nightmare
Are we sleepwalking into discrimination? Have we properly considered how AI ecosystems like Gemini might systematically exclude certain voices? My studies at Helsinki and Lund focused heavily on how LLMs inherit training data biases, and now I’m watching Google implement exactly these systems as information arbiters.
When AI Overviews curate responses, whose perspectives get prioritised? Are we creating a system where marginalised communities, whose voices already struggle for visibility, become even more invisible? I keep thinking about my reading on algorithmic bias detection. If we can’t even properly audit these systems for discriminatory outcomes, how can we claim they’re serving diverse communities equitably?
And even more troublingly, are we inadvertently creating what I call “bias at scale”? Essentially, a scenario where a single AI system’s limitations get amplified across billions of search interactions?
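To make the auditing question a little less abstract, here’s a minimal, purely illustrative sketch of what one kind of visibility audit could look like: comparing how often different categories of sources are cited in AI-generated answers versus a conventional organic-results baseline for the same queries. The categories, the sample data, and the 10% threshold are all hypothetical; this is a toy illustration of the kind of check I’d want to see, not a description of how Google’s systems actually work.

```python
# Toy visibility audit: compare how often different publisher categories
# are cited in AI-generated answers versus a baseline of classic organic
# results for the same queries. All data below is hypothetical.

from collections import Counter

# Hypothetical citation logs: one list of cited-source categories per answer.
ai_overview_citations = [
    ["major_news", "major_news", "gov"],
    ["major_news", "encyclopaedia"],
    ["major_news", "gov", "encyclopaedia"],
]
organic_results_citations = [
    ["major_news", "independent_blog", "gov"],
    ["community_forum", "major_news", "encyclopaedia"],
    ["independent_blog", "gov", "community_forum"],
]

def visibility_share(samples):
    """Share of total citations going to each source category."""
    counts = Counter(cat for answer in samples for cat in answer)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

ai_share = visibility_share(ai_overview_citations)
baseline_share = visibility_share(organic_results_citations)

# Flag categories whose visibility drops sharply in the AI surface.
for category, baseline in baseline_share.items():
    drop = baseline - ai_share.get(category, 0.0)
    if drop > 0.10:  # arbitrary threshold, for illustration only
        print(f"{category}: visibility down {drop:.0%} vs. organic baseline")
```

Even a crude check like this only works if researchers can observe which sources an AI answer actually drew on, which is precisely the transparency question the next section turns to.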
What really concerns me is whether we’re moving from a flawed but pluralistic information ecosystem to something far more homogeneous. Have we thoroughly examined whether these systems might systematically favour certain types of knowledge whilst marginalising others? This lack of scrutiny leads directly to critical questions around transparency.
Who’s Watching the Information Watchers?
This is where my current studies in Operationalising Ethics in AI become particularly relevant. We talk endlessly about transparency in AI systems, but what does that actually mean when those systems control access to human knowledge?

At least with traditional search results, savvy users could reverse-engineer ranking factors. Now we’re dealing with complete algorithmic opacity. But if users can’t understand why certain information appears in AI Overviews whilst other perspectives vanish, can we genuinely call this democratic access to knowledge?
I can’t help but wonder whether the European Union’s AI Act transparency requirements will prove adequate for this use case. When AI systems become primary information gatekeepers, shouldn’t users have some visibility into how those curatorial decisions are made? The implications for user privacy are equally pressing.
What’s the True Cost of Personalised Truth?
Here’s something else that’s really troubling me. AI Mode requires unprecedented user profiling to generate those personalised responses. But have we properly considered what this means for privacy at scale?

As part of my studies, I’ve been learning about privacy-preserving AI implementations like Anthropic’s Clio. Yet Google’s approach seems to prioritise functionality over fundamental rights. Every query now becomes a data point for increasingly sophisticated behavioural modelling.
But here’s the deeper question. When AI systems know so much about us that they can predict what information we want to see, are we still encountering genuine knowledge discovery? Or are we trapped in algorithmically constructed comfort zones?
What worries me most is whether we’re creating systems that tell us what we want to hear rather than what we need to know. Have we considered the democratic implications of personalised information curation? I believe these challenges bring us to the heart of AI governance.
Who Governs the Governors?
The principles of responsible AI governance are screaming out for us to ask the hard questions. Have all the potential downsides been thoroughly looked at? Are diverse viewpoints truly being protected? Are these systems really going to serve the public, or mainly just commercial agendas?
This is where my fourteen years of experience in SEO and digital marketing intersect with my academic studies in AI ethics. I’ve seen how algorithmic changes can devastate entire business sectors overnight. But now we’re talking about something far more fundamental: the potential reshaping of how humanity accesses knowledge.
I can’t see legislators staying quiet on this for long, especially with ongoing antitrust cases in the US and EU and regulatory scrutiny already intensifying. And yet I’m not sure whether our current frameworks are even equipped to handle information gatekeeping at this scale. What mechanisms do we have to ensure these systems work equitably for all stakeholders?
And more fundamentally, who gets to decide what “equitable” even means in this context? Without proper governance, the societal impact could be devastating.
Are We Accidentally Killing Democracy?
This is where I find myself asking the most uncomfortable questions. Traditional search, for all its flaws, maintained some element of information serendipity. Users could stumble across unexpected perspectives, follow intellectual (or silly) breadcrumb trails, and engage with challenging, and at times polarising, viewpoints.
But what happens when AI systems optimise for user satisfaction and engagement? Are we creating what I call “frictionless echo chambers”? Essentially, environments where users never encounter ideas that challenge their existing beliefs.
My studies in operationalising AI ethics constantly return to this question. How do we preserve democratic discourse when algorithmic systems mediate our access to information? Are we inadvertently engineering ignorance by eliminating intellectual discomfort?
And here’s the question that really, truly haunts me. If these systems become ubiquitous, what happens to critical thinking skills? When AI provides pre-digested answers to complex questions, do we lose the cognitive muscles needed for independent analysis?
What Would Responsible AI Implementation Actually Look Like?
Instead of just accepting these changes as inevitable technological progress, I genuinely believe we need a proper, meaningful conversation about how we govern them. We need mechanisms that ensure these incredibly powerful AI systems work for all stakeholders.
It’s one thing to say we want “fair and transparent” AI systems. It’s quite another to build governance mechanisms that actually deliver on those promises.
I honestly think that the questions we ask now will shape how humanity accesses knowledge for generations. Are we asking the right ones? And more crucially, are we prepared to act on the answers? What do you think?
You can read “100 things Google announced at Google I/O” here: https://blog.google/technology/ai/google-io-2025-all-our-announcements/
If you find value in these explorations of AI, consider a free subscription to get new posts directly in your inbox. All my main articles are free for everyone to read.
Becoming a paid subscriber is the best way to support this work. It keeps the publication independent and ad-free, and gives you access to community features like comments and discussion threads. Your support means the world. 🖤
By day, I work as a freelance SEO and content manager. If your business needs specialist guidance, you can find out more on my website.
I also partner with publications and brands on freelance writing projects. If you're looking for a writer who can demystify complex topics in AI and technology, feel free to reach out here on Substack or connect with me on LinkedIn.