
Artificial Intelligence is reshaping global industries, but the way AI models are built, trained, and commercialized has raised concerns about ethical development, fairness, and equitable access. The rapid rise of generative AI, particularly foundation models, has been fueled by massive data extraction from publicly available online sources – often without consent or fair compensation to content creators. This pattern mirrors the exploitative practices of surveillance capitalism.
However, a new approach is emerging: building AI models based on openly contributed and collaboratively curated content. A recent study, “Can Generative AI Be Egalitarian?” by Philip Feldman, James R. Foulds, and Shimei Pan, published on arXiv, explores the potential of an egalitarian AI development model inspired by the success of Wikipedia and the Free and Open-Source Software (FOSS) movement.
The extractive nature of current AI models
Generative AI, including models like ChatGPT and Midjourney, relies heavily on massive datasets harvested from the internet. The study argues that this model intensifies data extraction: corporations profit from user-generated content while the original creators receive no reciprocal benefit. The for-profit AI ecosystem is largely closed-source, meaning that a select group of tech companies controls the data, training processes, and deployment of these models. As AI progresses, these companies consolidate control over generative capabilities, potentially creating monopolies over digital creativity, knowledge production, and information dissemination.
A particularly concerning consequence is the misalignment between AI companies’ profit motives and societal well-being. Since AI models are optimized for commercial gain, they may prioritize engagement over accuracy, spreading misinformation, reinforcing existing biases, and favoring privileged perspectives. The paper highlights how this approach undermines transparency, public trust, and fair access to AI technology. Without intervention, AI’s evolution will continue favoring centralized corporate interests rather than benefiting the broader society.
The case for a collaborative, egalitarian AI model
The study proposes an alternative: a decentralized, community-driven AI development model that follows the principles of open-source collaboration. Inspired by the Wikipedia model, this approach would involve voluntarily contributed, ethically sourced datasets that form the foundation for AI training. Rather than relying on corporate-controlled data scraping, this system would incentivize contributors to share content under open licenses, fostering inclusivity and diversity in training materials.
Such a model could help address some of AI’s most pressing ethical challenges, including bias reduction, transparency, and data sovereignty. The study suggests that a collaborative AI ecosystem would result in models that are more responsive to user needs, less prone to algorithmic bias, and better aligned with ethical standards. By prioritizing public good over profit, this approach could democratize AI development, ensuring that AI tools remain accessible and representative of diverse global perspectives.
However, challenges remain. The authors acknowledge that curating high-quality, diverse, and representative datasets under an egalitarian AI model is a complex task. Ensuring quality control, preventing misinformation, and maintaining neutrality would require governance structures similar to those in place for Wikipedia and open-source software projects. The study also highlights the computational costs associated with training large AI models, which historically have been concentrated in the hands of tech giants with vast infrastructure resources.
The role of open-source and decentralized AI governance
The study explores how AI governance could be reshaped through decentralized participation. Currently, AI decision-making is dominated by corporate entities that dictate how models are trained and how their outputs are regulated. In contrast, an egalitarian AI framework would involve a transparent, community-driven governance system where contributors play an active role in shaping the evolution of AI models.
One proposed solution is a blockchain-based governance structure, where content contributors and curators vote on the inclusion and weighting of datasets in AI training processes. By distributing decision-making authority, this model ensures that AI systems evolve based on collective input rather than unilateral corporate control. Additionally, researchers propose that AI ethics boards, composed of diverse global stakeholders, oversee the development process to mitigate bias and maintain accountability.
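Setting the blockchain layer aside, the core of such a governance mechanism is a score-aggregation step: contributors vote on candidate datasets, and only datasets with enough independent votes enter training, weighted by community consensus. A minimal sketch in Python (the quorum rule, dataset names, and scoring scale here are illustrative assumptions, not details from the study):

```python
from collections import defaultdict

def tally_dataset_votes(votes, quorum=3):
    """Aggregate contributor votes on candidate training datasets.

    votes: list of (voter_id, dataset_id, score) tuples, score in [0, 1].
    A dataset is included only if it reaches the vote quorum; its
    training weight is the mean score across its voters.
    """
    scores = defaultdict(list)
    for voter, dataset, score in votes:
        scores[dataset].append(score)
    return {
        ds: sum(s) / len(s)
        for ds, s in scores.items()
        if len(s) >= quorum
    }

votes = [
    ("alice", "wiki-dump", 0.9), ("bob", "wiki-dump", 0.8),
    ("carol", "wiki-dump", 1.0), ("dave", "scraped-forum", 0.2),
]
# "wiki-dump" meets the three-voter quorum; "scraped-forum" does not.
weights = tally_dataset_votes(votes)
```

In a deployed system, the vote ledger rather than the tally is what a blockchain would make tamper-evident; the aggregation logic itself stays this simple.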
Furthermore, integrating federated learning techniques – where AI models are trained on local devices without centralizing all data – could enable AI training while preserving privacy. Such methods could also help reduce dependency on massive centralized data centers, making AI more sustainable and energy-efficient.
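The federated pattern described above can be illustrated with a minimal federated-averaging (FedAvg-style) sketch: each client takes a gradient step on data that never leaves its device, and only the resulting model weights are combined centrally. The logistic-regression clients, round count, and learning rate below are illustrative assumptions, not details from the study:

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1):
    """One gradient-descent step of logistic regression on a client's local data."""
    preds = 1 / (1 + np.exp(-data @ weights))
    grad = data.T @ (preds - labels) / len(labels)
    return weights - lr * grad

def federated_average(client_weights, client_sizes):
    """FedAvg: average client models, weighting each by its local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two simulated clients hold private data; only weight vectors are shared.
rng = np.random.default_rng(0)
global_w = np.zeros(3)
clients = [(rng.normal(size=(20, 3)), rng.integers(0, 2, 20).astype(float))
           for _ in range(2)]
for _ in range(5):  # communication rounds
    updated = [local_update(global_w.copy(), X, y) for X, y in clients]
    global_w = federated_average(updated, [len(y) for _, y in clients])
```

The privacy property comes from what crosses the network: raw data stays local, and only the aggregated `global_w` is ever centralized.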
The future of egalitarian AI: Challenges and opportunities
While the study advocates for a shift toward open, inclusive AI development, it acknowledges key barriers to implementation. These include:
- Scalability: Ensuring an egalitarian AI model can compete with corporate-backed alternatives in terms of performance and reliability.
- Quality Control: Preventing bias, misinformation, and manipulation in a decentralized training dataset.
- Economic Viability: Establishing sustainable funding models for maintaining an open AI ecosystem.
Despite these challenges, the paper argues that the potential benefits of an egalitarian AI system far outweigh the difficulties. By aligning AI development with public interest, ethical transparency, and community participation, the future of AI could be steered toward a model that benefits all, not just a select few corporations. With increasing scrutiny on AI ethics and accountability, the transition to more transparent, community-led AI may not just be desirable – it may be essential for ensuring AI serves humanity equitably.
The study concludes that AI’s future is at a crossroads: it can continue along the current extractive, profit-driven path, or pivot toward a more collaborative, democratic, and fair system. The choice we make today will determine whether AI remains a tool for corporate dominance or becomes a truly egalitarian technology that empowers all of society.