Participatory AI design brings abolitionist ethics to artificial intelligence

A new wave of AI research is shifting away from surveillance and control toward community empowerment. Researchers have introduced a groundbreaking framework that places artificial intelligence in direct service of social justice movements.

Their study, “AI for Abolition? A Participatory Design Approach,” published in the proceedings of the Workshops at the Fourth International Conference on Hybrid Human-Artificial Intelligence (HHAI-WS 2025), held in Pisa, Italy, presents a participatory design model for developing AI systems grounded in abolitionist ethics and restorative justice values.

Reframing artificial intelligence for abolitionist practice

The authors aim to answer a critical question: Can AI, a technology long associated with policing, bias, and systemic inequality, be reimagined to support abolitionist and restorative justice work? For decades, artificial intelligence has been deployed in systems that reinforce harm, from predictive policing and facial recognition to biased language models that replicate social hierarchies. The authors challenge this trajectory by proposing that AI can instead be mobilized as a tool for liberation, designed collaboratively with those directly affected by carceral systems.

At the heart of the study lies the concept of participatory design, a process that brings end users, in this case restorative and transformative justice (RJ/TJ) practitioners, into the creation of technological systems. Rather than designing for communities, the researchers designed with them. This methodology stands in sharp contrast to conventional AI development, which is often driven by corporate interests and detached from the lived experiences of marginalized groups.

The team’s approach uses Participatory Action Research (PAR) and data feminist frameworks, ensuring that justice practitioners co-define the goals, ethical boundaries, and success metrics of the system. This design philosophy rejects the top-down logic that dominates AI research, where metrics like efficiency or accuracy often overshadow human dignity and care. Instead, the project seeks to embed abolitionist ethics directly into the algorithmic fabric of emerging large language models (LLMs).

Through interviews and co-creation sessions with nine experienced RJ/TJ practitioners across the United States, the researchers explored how AI could be ethically introduced into non-carceral justice spaces. Participants were encouraged to imagine and critique AI’s potential roles, from administrative assistance to reflective support, while identifying clear ethical limits to prevent harm or surveillance.

Building AI that reflects justice, not punishment

The study develops an evaluation framework for measuring how well AI systems align with abolitionist values. The researchers identified six guiding principles rooted in the philosophy of transformative justice: rejecting violence as a solution, understanding harm as contextual, recognizing community interdependence, emphasizing accountability as mutual responsibility, embodying the change sought, and fostering creativity as resistance to institutional control.

These values were operationalized into practical assessment criteria, enabling the team to test and refine AI models against them. The researchers then analyzed how large language models such as GPT-4o, Claude 3.7 Sonnet, Gemini 2.5 Pro, and Change Agent perform when evaluated through this ethical lens. Rather than focusing on technical benchmarks like speed or accuracy, the evaluation emphasized ethical coherence, nonviolence in language, contextual sensitivity, and inclusivity.
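The article does not reproduce the study’s actual scoring instrument, so the following is only a minimal sketch of what a rubric-style evaluation harness of this kind could look like. The criterion names, the 0–2 rating scale, and the reviewer workflow are assumptions for illustration, loosely paraphrasing the criteria described above.

from dataclasses import dataclass
from statistics import mean

# Hypothetical rubric loosely paraphrasing the article's criteria;
# the study's actual instrument and scale are not published here.
CRITERIA = [
    "nonviolence_in_language",
    "contextual_sensitivity",
    "inclusivity",
    "mutual_accountability",
]

@dataclass
class Rating:
    criterion: str
    score: int        # assumed scale: 0 = misaligned, 1 = partial, 2 = aligned
    rationale: str    # reviewer's note explaining the score

def aggregate(ratings: list[Rating]) -> dict[str, float]:
    """Average reviewer scores per criterion for one model response."""
    return {
        c: mean(r.score for r in ratings if r.criterion == c)
        for c in CRITERIA
        if any(r.criterion == c for r in ratings)
    }

# Example: two practitioner-reviewers rate a single model answer.
ratings = [
    Rating("nonviolence_in_language", 2, "no punitive framing"),
    Rating("nonviolence_in_language", 1, "one coercive suggestion"),
    Rating("contextual_sensitivity", 2, "asks about relationships first"),
]
print(aggregate(ratings))

In a setup like this, the scores would come from human practitioners rather than automated benchmarks, which mirrors the study’s emphasis on ethical coherence over technical performance.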

Through participatory workshops, the research team identified several potential AI use cases that complement, rather than replace, human-led restorative justice work. These include automating administrative tasks, such as scheduling and record keeping; creating accessible educational resources about restorative practices; analyzing qualitative data from justice circles; and assisting practitioners in post-session reflection. Importantly, none of these applications involve AI directly interacting with affected individuals, an intentional ethical boundary established to prevent the replication of surveillance or coercion often embedded in traditional AI deployments.

By translating abolitionist ethics into measurable AI evaluation metrics, the study introduces a transformative concept known as “value-reflective AI.” This framework calls for systems that do not merely avoid harm but actively reflect the principles of equity, community care, and mutual accountability. The authors argue that value alignment cannot be achieved through technical fixes alone; it requires a fundamental redefinition of what constitutes intelligence, success, and justice in AI systems.

Toward participatory, ethical, and transformative AI design

The study extends beyond conceptual critique to outline a concrete roadmap for restructuring AI development. Its methodology merges participatory design with technical evaluation, offering a replicable model for future justice-oriented technology projects. The researchers propose a multi-phase process: community consultation, co-design, value articulation, AI evaluation, and iterative improvement based on participatory feedback.

In its next phase, the project will develop a Retrieval-Augmented Generation (RAG)–based model trained on abolitionist and restorative justice texts. This domain-specific dataset will enable AI systems to generate contextually grounded, ethically aligned responses informed by real-world justice practices. The team also plans to adapt the concept of Constitutional AI, a model alignment strategy that uses explicit moral frameworks to guide outputs, into an “Abolitionist Constitution,” embedding justice principles at the core of model governance.
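The article describes this planned pipeline only at a high level, so the sketch below is purely illustrative of the general retrieve-then-generate pattern paired with a constitution-style preamble. The corpus passages, the constitution text, the toy term-overlap retriever, and the prompt format are all stand-ins, not the project’s actual data or code.

import math
from collections import Counter

# Stand-in corpus and constitution text; the study's materials are not public.
CORPUS = [
    "Accountability is a mutual responsibility held by the whole community.",
    "Harm is contextual and is addressed through relationships, not punishment.",
    "Restorative circles center the voices of those most affected.",
]

CONSTITUTION = (
    "Respond without punitive or carceral framing; treat harm as contextual; "
    "defer to practitioner judgment and community-defined accountability."
)

def _tf(text: str) -> Counter:
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    shared = set(a) & set(b)
    num = sum(a[t] * b[t] for t in shared)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank corpus passages by term overlap with the query (toy retriever)."""
    q = _tf(query)
    return sorted(CORPUS, key=lambda doc: _cosine(q, _tf(doc)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble a constitution-guided, retrieval-grounded prompt for an LLM."""
    context = "\n".join(f"- {p}" for p in retrieve(query))
    return f"{CONSTITUTION}\n\nRelevant practice notes:\n{context}\n\nQuestion: {query}"

print(build_prompt("How should a facilitator frame accountability after harm?"))

A production system would replace the toy retriever with embedding-based search and pass the assembled prompt to a language model; the point of the sketch is simply how retrieved domain texts and an explicit value statement can both shape the model’s input.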

The researchers’ participatory sessions revealed another critical insight: trust is as important as accuracy. Many practitioners expressed ambivalence toward AI, citing fears of misuse and data exploitation. By giving communities ownership over design and data governance, the project addresses these concerns directly, reframing AI as a collaborative partner rather than a distant authority. This human-centered co-design process ensures that technology development remains accountable to those it serves, not those who fund it.

The authors also highlight that participatory justice work is emotionally intensive, requiring spaces for reflection and self-regulation, areas where AI can play a supportive, rather than prescriptive, role. By offering tools that facilitate reflection and reduce administrative burden, AI can indirectly enhance human capacity for empathy and care within justice ecosystems.

Rather than being optimized for control or productivity, AI should be optimized for healing, equity, and liberation, the study asserts. This shift requires more than technical refinement; it demands a cultural transformation in how researchers, policymakers, and communities envision the role of technology in society.
