There's a pattern in left politics that's both understandable and ultimately self-defeating: we're brilliant at articulating what we oppose, but often struggle to define what we're for. We can catalog the horrors of capitalism and empire in exhaustive detail, but when pressed on alternatives, we sometimes retreat into vague aspirations or defensive disclaimers. This is a fatal weakness. For every "no" to the systems crushing us, we must articulate a compelling "yes" to what we're building instead.
This applies urgently to artificial intelligence—perhaps the most transformative technology of our generation. And here's where much of the left has gotten it wrong: by treating AI purely as a threat, we've ceded the entire conversation to corporate futurists and Silicon Valley ideologues. We've failed to articulate what AI could mean from a socialist, working-class perspective. That's a massive strategic error.
Let me be clear about what I'm celebrating: AI has fundamentally democratized access to capabilities that were previously exclusive to the wealthy. For the first time in history, working people can have something approximating a full-time personal assistant—available 24/7, capable of helping with research, writing, coding, creative projects, education, planning, and countless other tasks. This was once available only to executives, academics with grant funding, or the independently wealthy. Now it's accessible to anyone with an internet connection.
This is a profound equalizer. Consider what this means practically: a single parent working two jobs can now get help drafting a letter to their landlord, understanding their legal rights, or tutoring their kid in math. An independent journalist without institutional backing can do research that previously required a staff. A worker trying to organize their workplace can access strategic advice and template documents. An artist can integrate the tool into their existing workflows. A student from an under-resourced school can access educational support their district can't afford. Activists can use it to draft statements, summarize policy documents, and plan campaigns.

None of this suggests we cede our critical thinking or creative capacities to AI. We still need to be in the driver's seat, as editors, directors, and critical thinkers. AI can make mistakes, just like humans or Wikipedia, so we have to fact-check, challenge our assumptions, and use the tool wisely. Students, for instance, can use AI lazily to write entire essays they never read or edit, or they can use it as a tool to learn, write, and expand their critical thinking and writing skills. Rather than tell people never to use AI, we should encourage AI literacy and best practices, the same way we teach media literacy, responsible journalism, and critical thinking.
When I compare this moment to previous technological revolutions, I think about how radicals once celebrated the printing press. Before Gutenberg, knowledge production was monopolized by institutions—monasteries, universities, state bureaucracies. The printing press didn't instantly create liberation, but it made possible the spread of ideas that ultimately challenged concentrated power. Similarly, the internet was initially celebrated by digital utopians as inherently democratizing—and while that optimism proved naïve about corporate capture, the internet genuinely did enable new forms of organizing, knowledge-sharing, and movement coordination that weren't possible before.
AI represents a similar inflection point. Yes, it comes embedded in capitalist relations—trained on stolen labor, deployed to maximize profit, concentrated in the hands of massive corporations. Yes, it's being weaponized for surveillance, union-busting, and the displacement of labor. All of that is true and must be fought.
But—and this is crucial—the technology itself also contains genuinely liberatory potential that we abandon at our peril. Our job isn't to reject AI wholesale, but to fight for its democratization while resisting its weaponization.
This means several concrete things:
First, defending accessibility. The corporate model wants to put advanced AI capabilities behind expensive paywalls, reserving the best tools for those who can pay premium subscriptions. A socialist approach demands that basic AI capabilities remain freely or cheaply accessible, treating them as essential infrastructure like libraries or public education.
Second, fighting for transparency and accountability. We need open-source AI models, public oversight of training data, and democratic governance over how these systems are developed and deployed. The alternative is a future where a handful of corporations control the primary cognitive infrastructure of society.
Third, resisting AI's use as a weapon against workers. This means opposing AI-powered surveillance systems, algorithmic management that strips workers of autonomy, and the use of automation specifically to weaken labor power rather than reduce necessary toil. The goal shouldn't be to prevent automation, but to ensure its benefits flow to workers rather than shareholders.
Fourth, imagining AI as a tool for collective liberation. What if AI could help coordinate cooperative enterprises? Facilitate participatory budgeting at scale? Make direct democracy more feasible in complex societies? Assist mutual aid networks in resource allocation? Help communities understand and resist gentrification or environmental racism? These applications exist, but they require intentional development.
The tragedy is that much progressive discourse has written off AI entirely, treating it as inherently oppressive rather than as contested terrain. This hands the entire future of the technology to those who see it purely as a profit engine or mechanism of control. It's the same mistake we made with the internet—allowing corporate capture to proceed largely unopposed because we were too busy lamenting the technology itself rather than fighting for its democratic governance.
I'm not a technological determinist. AI won't automatically liberate anyone. Like the printing press, like the internet, like every major technology, it's a terrain of struggle. The question isn't whether AI is "good" or "bad" in some abstract sense—it's who controls it, who benefits from it, and what purposes it serves.
Right now, working people are already using AI to augment their capabilities in ways that genuinely improve their lives and expand their agency. Telling them this is somehow betraying left principles is both condescending and strategically foolish. Instead, we should be helping them understand both the technology's potential and its dangers, while building movements that can steer its development toward justice rather than exploitation.
This is what it means to define our "yes." Not naive techno-optimism, but a clear-eyed recognition that this technology, like all technologies, can serve different masters. Our task is ensuring it serves the many rather than the few—and that requires engagement, not rejection.
The printing press enabled both revolutionary pamphlets and propaganda. The internet enabled both grassroots organizing and surveillance capitalism. AI will enable both liberation and oppression. Which tendency wins depends on the political struggles we wage right now, not on the inherent nature of the technology itself.
So yes, I celebrate that working people now have access to capabilities once reserved for elites. And yes, I'm committed to fighting against AI's use as a weapon of exploitation. These aren't contradictory positions—they're two sides of the same struggle for democratic control over the tools shaping our collective future.
That's the "yes" we need to articulate: AI as a commons, governed democratically, deployed for human flourishing rather than profit maximization. It's a fight worth having, and one we can't afford to abandon.
Tim Hjersted is the director and co-founder of Films For Action, a library dedicated to the people building a more free, regenerative and democratic society.
Subscribe on Substack for updates.