Ensuring equitable technological transitions: AI use in the workforce

Much of the public conversation about generative AI and work focuses narrowly on job loss or productivity gains. But research by Professor of Sociology Chris Benner draws on lessons from past technological transitions and current work on AI to inform how innovation can improve our work now and in the future. 

For industries like food service, agriculture, and personal service, critical pieces of the picture are still missing, making it difficult to understand exactly how to leverage technology for good and involve workers in implementing AI on the job. We interviewed Benner to learn more about how AI could be integrated into the workplace and some of the opportunities we have to equitably guide its use before it causes harm. This interview has been edited for length and clarity.

Chris Benner, Professor of Sociology

Q: Many people are concerned about AI replacing jobs. How do you see AI being designed and used in the workforce?

A: It is unlikely that we’ll see a rapid loss of jobs in any single occupational category. As in previous rounds of rapid technological change, most new technologies are changing tasks, not complete jobs, allowing job activities to shift over time. And technologies don’t determine outcomes on their own; institutional choices, business models, policies, governance, and power relations do.

The key question is who gets to shape AI use in the workforce. The same AI tools can produce very different outcomes depending on who is involved in decisions about design, deployment, and governance. We already see instances where AI deskills work, intensifies surveillance, and hollows out jobs, and others where it augments workers, reduces drudgery, and improves job quality.

The most immediate risks aren’t mass layoffs, but algorithmic management and electronic monitoring; increased work intensity and loss of autonomy; and racialized and gendered bias in scheduling, evaluation, and discipline. These dynamics echo earlier waves of automation, but AI scales and obscures managerial power in new ways.

Q: AI is being implemented differently across many sectors. Do you see any patterns, challenges, or new opportunities emerging?

A: I think we will see automation in service and blue-collar work for many tasks, leading to shifts in activities and responsibilities within existing jobs. However, since Large Language Models (LLMs) affect writing, analysis, interpretation, and communication, many of the current changes will focus on professional, technical, and white-collar occupations. These are precisely the occupations where people have greater flexibility in their work responsibilities and are thus more likely to adjust to changing work activities. 

However, we are overlooking AI’s potential to make invisible skills visible, especially in care, education, and communication work. Some of the most economically and socially important work — childcare, early education, elder care, health support, teaching, mentoring — cannot easily be automated. 

The problem is not that these jobs lack skill; it’s that we are bad at recognizing, measuring, and rewarding quality in relational and interactive work. Ironically, generative AI could help here: by deepening our understanding of communication, pedagogy, and care practices; by supporting training, feedback, and professional development; and by making tacit knowledge more visible without replacing human judgment.

Q: Your research explores some of the big-picture changes that happen during labor and social transitions. How do you think AI can reshape our social structures related to work?

A: AI underscores the need to rethink social supports tied to employment. This transition could serve as a catalyst to question why key social supports are linked to one’s job, particularly health insurance, but also retirement benefits, access to training, and income security.

AI systems are built on collective knowledge, scraped from the writings, images, and videos of millions of workers, writers, teachers, artists, and caregivers. If AI produces broad productivity gains from that collective inheritance, it raises a basic question of fairness: Why don’t we treat some of those gains as a shared social return? This opens the door to ideas like an AI universal dividend, stronger social wages, and universal access to healthcare and lifelong learning, decoupled from employment.

Q: How can we ensure AI is used for good in the workforce?

A: We need to focus on worker-centered innovation and ask ourselves what AI would look like if it were designed to make work better. This could include using AI to support training, mentoring, and skill development; to enable better scheduling, safety, and career pathways; and to reduce administrative burden so workers can focus on relational or creative work.

However, achieving these kinds of outcomes requires workers’ voices, industry standards, and public-interest governance, not just market adoption. It means treating AI governance as a public policy issue, not just a technical or corporate one.

Right now, we should focus on expanding worker participation in AI design and deployment decisions and updating labor standards to address algorithmic management and surveillance. We can also invest in learning infrastructure, including community-based and employer-embedded learning, not just in reskilling individuals.

AI will not determine the future of work on its own. The real question is whether we treat this as another extractive technological transition — or as an opportunity to rebuild institutions, norms, and governance around work in ways that center dignity, equity, and learning.
