Research by sociologist Chris Benner examines how artificial intelligence (AI) is changing the workplace and what can be done to ensure those changes benefit workers. While much of the public debate about AI focuses on fears of job loss or hopes for increased productivity, Benner's work draws on past technological transitions to offer a broader perspective.
Benner notes that rapid, AI-driven job loss in any single occupation is unlikely. Instead, he says, new technologies typically change specific tasks within jobs rather than eliminating entire roles. “Most new technologies, like in previous rounds of rapid technological change, are changing tasks, not complete jobs, allowing job activities to shift over time. And technologies don’t determine outcomes on their own, but institutional choices, business models, policies, governance, and power relations do,” Benner said.
He emphasizes that the way AI impacts work depends largely on who makes decisions about its design and use. “The key question is who gets to shape AI use in the workforce. The same AI tools can produce very different outcomes depending on who is involved in decisions about design, deployment, and governance. For example, we see instances where AI can either deskill work, intensify surveillance, and hollow out jobs, or it can augment workers, reduce drudgery, and improve job quality,” he explained.
According to Benner, immediate risks from AI are less about mass layoffs and more about issues such as algorithmic management and electronic monitoring. These practices can increase work intensity and reduce worker autonomy while introducing bias into scheduling and evaluation processes. He points out that these concerns mirror earlier automation trends but notes that “AI scales and obscures managerial power in new ways.”
Benner also observes that while automation will likely affect service and blue-collar jobs by shifting tasks within existing roles, current advances—particularly those involving large language models—are most noticeable in professional and white-collar occupations. Workers in these fields often have more flexibility to adapt to changing responsibilities.
He argues that society tends to undervalue skills found in care work or education because they are difficult to measure or reward through traditional means. However, generative AI could help make these skills more visible: “Ironically, generative AI could help here…by supporting training, feedback, and professional development…and by making tacit knowledge more visible without replacing human judgment.”
On a broader level, Benner suggests that the rise of AI highlights the need to reconsider how social supports such as health insurance or retirement benefits are tied to employment status. He proposes considering shared social returns from productivity gains generated by collective knowledge embedded in AI systems: “If AI produces broad productivity gains from that collective inheritance…Why don’t we treat some of those gains as a shared social return? This opens the door to ideas like an AI universal dividend…”
To ensure positive outcomes from workplace AI adoption, Benner calls for worker-centered innovation: “We need to focus on worker-centered innovation and ask ourselves what AI would look like if it were designed to make work better.” He advocates for involving workers in decision-making around technology deployment; updating labor standards related to algorithmic management; investing in learning infrastructure; and treating governance of workplace technology as a matter of public policy rather than leaving it solely up to corporations.
“AI will not determine the future of work on its own,” Benner concludes. “The real question is whether we treat this as another extractive technological transition — or as an opportunity to rebuild institutions…in ways that center dignity, equity, and learning.”