
The rise of generative artificial intelligence (AI) has triggered one of the most polarized cultural debates of the digital age, one that pits innovation against artistic integrity. A new study by Nataliia Laba of the University of Groningen explores this ideological clash through the lens of online discourse, revealing how the public imagination around visual AI art is shaping the future of creativity itself.
Published in AI & Society under the title “Whose imagination? Conflicting narratives and sociotechnical futures of visual generative AI,” the study dissects nearly 4,000 YouTube comments responding to the viral video “AI vs Artists – The Biggest Art Heist in History.”
Laba’s research examines how users interpret visual generative AI not just as a technology but as a social, cultural, and moral phenomenon. Through data-driven and thematic analysis, the study maps a deep ideological split between those who see AI as a creative partner and those who view it as a system of exploitation that erodes artistic authenticity.
AI as creative liberation or cultural threat
On one side, supporters frame AI as a revolutionary tool that democratizes artmaking. They argue that visual generative models such as Midjourney, DALL·E, and Stable Diffusion empower individuals who lack traditional artistic training to express ideas with unprecedented ease. For them, AI marks the next logical step in creative evolution, expanding human imagination rather than replacing it.
Opponents, however, construct a counter-narrative that frames AI as a cultural and economic threat. To these critics, the term “AI-generated art” conceals a system of large-scale data extraction that exploits human creativity without consent or compensation. Laba’s study finds that these users frequently describe AI as a form of “industrialized mimicry,” warning that mass adoption could devalue original art, destabilize creative professions, and erode public appreciation for human craftsmanship.
This divide is reflected in how people anthropomorphize AI. Many supporters liken model training to human learning or inspiration, arguing that just as artists study others’ work, AI learns from visual patterns. Critics reject this analogy, viewing AI’s data scraping as digital theft rather than learning. These two metaphors, AI as “student” versus AI as “thief,” anchor the opposing moral frameworks that dominate online discourse.
Laba’s quantitative analysis reveals a slim majority leaning toward the “tool” narrative, suggesting that optimism about AI’s creative potential still outweighs fear. However, the intensity and coherence of the opposing camp signal that ethical resistance is not fringe but foundational to how society understands technological change.
Sociotechnical imaginaries and the battle for cultural futures
The study situates this online conflict within the broader theory of sociotechnical imaginaries: collective visions of how technology should serve society. Laba argues that YouTube discussions about generative AI function as arenas where these imaginaries compete for legitimacy.
Proponents align with Silicon Valley’s innovation-driven imaginary, which celebrates disruption, openness, and the democratization of knowledge. This worldview frames AI as a neutral or even benevolent force that expands creative participation and blurs the boundaries between artist and audience.
In contrast, artist-led communities articulate a counter-imaginary rooted in justice, authorship, and cultural stewardship. They challenge the notion that technological progress is inherently good, arguing instead that innovation must be accompanied by accountability. Their discourse emphasizes data ethics: how AI models are trained, who owns the training material, and who profits from the outputs.
Laba highlights how this contest between imaginaries is not just philosophical but materially consequential. The way society resolves it will influence future regulation, intellectual property law, and the norms governing AI in creative industries. Public opinion, she suggests, acts as a “social shaping force,” steering the direction of both policy and corporate strategy.
Her network analysis of comment clusters shows that discussions about AI ethics and ownership are not isolated but interwoven with broader anxieties about automation, labor, and digital capitalism. Generative AI becomes a proxy for larger concerns about who controls creative labor and how cultural value is distributed in an algorithmic economy.
Ethics, consent, and the path toward responsible AI art
The final section of Laba’s research moves beyond polarization to ask what a more equitable and sustainable framework for AI-assisted art might look like. While the “AI as tool” narrative dominates, her analysis reveals a growing demand for ethical visual AI: systems that respect consent, attribution, and fair compensation.
Many commenters propose mechanisms such as opt-out datasets, creator licensing, and algorithmic transparency to address exploitative practices. Others suggest that AI companies should share profits with the artists whose work trains their models, or at minimum disclose the provenance of the images used. These suggestions reflect a public appetite for a middle ground, where creativity can flourish without sacrificing fairness or integrity.
However, Laba cautions that such reforms require more than technical fixes. They demand cultural recognition that art is not merely data but a form of human expression embedded in history, labor, and emotion. The study argues that responsible AI must evolve through inclusive dialogue between technologists, policymakers, and creative communities rather than being dictated by corporate interests alone.