Anthropic announced its Economic Futures Programme this week. Like many tech initiatives, it offers promise and peril in equal measure. But first, credit where it's due.

Anthropic acknowledges that "publishing data alone is not enough" and that we need to "discuss potential solutions for managing AI's impact."
This represents genuine progress beyond the usual Silicon Valley playbook of disrupting first and asking questions later. Yet I can't help but examine what's missing alongside what's present.
The complexity of getting it right
The programme's research grants offer up to £40,000 ($50,000) for studying "AI's effects on labour, productivity, and value creation". This reveals an interesting tension.
Yes, "value creation" risks privileging shareholder metrics over human flourishing. But I also recognise that starting with economic impacts makes strategic sense.
Policymakers respond to economic data. Businesses need financial frameworks to justify ethical choices. Sometimes pragmatism opens doors that idealism cannot.
Still, I worry about whose questions get prioritised. Will researchers investigate the Manchester retail worker adapting to automated systems? The teacher in Birmingham integrating AI tools whilst preserving human connection?
Or will grant structures inadvertently favour studies that confirm AI's productivity benefits? The challenge is structural bias; research frameworks shape findings before the first data point is collected.
Forums, voices, and the art of inclusion
The programme promises policy forums bringing together "researchers, policymakers, and practitioners." Having sat in similar (board)rooms, I know how easily good intentions can reproduce existing hierarchies.
Yet, I also appreciate the genuine difficulty here. How do you create forums that are both inclusive and effective?
Too narrow, and you miss crucial perspectives. Too broad, and nothing actionable emerges.
Perhaps the answer lies in structured rotation, ensuring today's "practitioners" include teachers, drivers, and care workers, not just tech employees. The Economic Futures Symposia could pioneer new models of democratic participation.
But this requires intentional design, not just open invitations.
The measurement paradox
Anthropic's Economic Index will track AI usage across the economy longitudinally. My years optimising for search algorithms taught me that metrics shape behaviour as much as they measure it.
There's risk in reducing human experience to data points. The freelance writer whose income streams dry up, the factory worker facing their third retraining; these stories resist easy quantification.
Still, without systematic measurement, anecdotes dominate policy discussions. The index could provide crucial evidence for those advocating worker protections or retraining programmes.
The question becomes how to measure wisely, combining quantitative tracking with qualitative research that captures what numbers cannot.
Independence and interdependence
The programme offers API credits to "independent researchers", raising questions about true independence when studying systems you depend upon. This tension seems inherent to our moment; the concentration of AI capabilities makes some degree of interdependence unavoidable.
Rather than demanding impossible independence, perhaps we need transparent acknowledgement of relationships and robust protections for critical findings.
Can Anthropic commit to publishing all funded research, including unfavourable results? Will they create mechanisms ensuring access continues regardless of findings?
These structural safeguards matter more than claims of independence.
Towards constructive engagement
What encourages me most is Anthropic's recognition that "society's response to AI is not predetermined", a statement that acknowledges agency in a discourse often dominated by inevitability.
The programme could pioneer new models of participatory governance if it embraces radical experiments in inclusion.
Imagine rotating citizen panels reviewing research priorities. Structured dialogues between workers experiencing AI's impacts and developers creating these systems. Funding for artists and philosophers alongside economists.
If we're willing to move beyond traditional expertise hierarchies, these could become practical possibilities.
With my six-year-old growing up in a world where AI will shape his opportunities in ways I can barely imagine, programmes like this matter. Perfect solutions don't exist, but thoughtful attempts at addressing AI's societal impacts deserve engagement, not just criticism.
How do we balance pragmatism with principles in shaping AI's economic impacts? What would constructive participation look like in your community?
Hi, I'm Miriam - an AI ethics writer, analyst and strategic SEO consultant.
I break down the big topics in AI ethics, so we can all understand what's at stake. And as a consultant, I help businesses build resilient, human-first SEO strategies for the age of AI.
If you enjoy these articles, the best way to support my work is with a free or paid subscription. It keeps this publication independent and ad-free, and your support means the world. 🖤