Three key takeaways from the INSEAD Workshop on AI and inequality

On a rainy spring day in Paris, CEPR, the INSEAD Stone Centre, and the Stone Centre at UCL brought together firms and researchers for a fascinating workshop.

Spanning panel discussions with business leaders across the creative, tech, consulting, and not-for-profit sectors, and presentations on technology adoption, reskilling, and AI-powered news feeds, the event facilitated collaboration on issues at the junction of economics research and business practice.

Below, we pick out three key takeaways to provide a flavour of the workshop and its implications for inequality. As the event was billed as a frank, ‘off the record’ conversation for the business leaders in attendance, their comments below have been anonymised.

1. AI adoption is uneven, creating new fault lines of inequality within and between firms

A recurring theme across the workshop was the unevenness of AI adoption. Panellists described a landscape where some organisations have moved beyond asking whether to adopt AI and are now focused on empowering every employee to build with it, while others remain at an early stage, still grappling with basic AI literacy. One firm representative described a workplace where engineers proactively seek extra training, while senior leaders show low uptake of AI tools. A head of DEI at a global retail firm highlighted digital exclusion as a primary concern, noting that AI literacy across their company remains low.

This unevenness matters for inequality. One panellist argued that inequality will emerge where companies fail to empower their people to build and innovate with AI. Meanwhile, a creative firm leader warned that AI may not, as commonly claimed, lower barriers to entry. Instead, large technology firms may emerge as new gatekeepers, whose platforms will control access and prevent reinvestment from returning to smaller or growing businesses. Virginia Minni (University of Chicago Booth School of Business and CEPR) reinforced the point with her paper “Reorganizing work inside the firm: task change after technology adoption”. Her research on technology adoption in a Latin American bank showed that while technology can increase routine productivity, reallocation of workers to higher-value tasks only happens when incentives are deliberately restructured. Without intentional organisational change, the gains from AI may not fully materialise.

2. The human cost of transition, including fear and reluctance to reskill, is central to AI’s inequality story

The workshop made clear that the transition to AI is not simply a technical or productivity challenge; it is a human one. Marco Leonardi (Università degli Studi di Milano) shared his research on workers in the Italian labour market in his paper “Unwilling to Reskill? Evidence from a survey experiment with Italian jobseekers”. His study found that respondents held biased beliefs about their reskilling options. Their sense of attachment to their previous occupation prevented them from making realistic judgements about other occupations they could be suited to. In the context of AI, this raises important questions about how people’s sense of professional identity shapes their willingness and ability to adapt.

Panellists echoed this throughout. One described the current atmosphere as “very charged”, acknowledging widespread fear of losing ideas, identity, and relevance. Several speakers noted that different firms will apply different levels of boldness in their use of AI, and that workers may express their own values through their affiliation with organisations that match their comfort level. The implication for inequality is significant: if adaptation to AI is mediated by identity and fear, then those already in precarious or less empowered positions may be less likely to engage with reskilling, which could widen existing gaps. UCL Stone Centre director Imran Rasul’s framing of the productivity J-curve underscored this: transitions involve an initial decline before gains materialise, and the costs of that dip within firms might not be borne equally.

3. Firms are striving for human-centric AI, but structural tensions remain unresolved

Across the panels, there was a strong consensus around the principle of keeping humans at the centre of AI adoption. Leaders from retail, consumer goods, and consulting firms all stressed a “people first” philosophy, emphasising semi-autonomous tools that support, rather than replace, human decision-making. One panellist summed it up: the goal is to keep people in the lead and have them use AI to create more value, not to cede control to fully autonomous systems. They noted that augmenting some tasks with AI reduced return on investment, and that some neurodivergent employees may find value and satisfaction in structured or repetitive work that might otherwise be considered for automation. Firms described investing in ethical guidelines, training programmes, and careful consideration of where AI augmentation genuinely improves outcomes.

Structural tensions persist, however. In recruitment, firms acknowledged the risk that AI reinforces existing biases, while the hiring process itself is becoming “AI versus AI”, with candidates using AI tools to optimise applications and employers using AI to screen them. Ingar Haaland’s (NHH Norwegian School of Economics and CEPR) keynote presentation “Implications of Transformative AI for Work and Human Capital” showed that a majority of candidates prefer AI-driven interviews, but the increase in qualitative data generated can be labour intensive to assess. A panellist from a consumer goods firm embraced the changing landscape as an opportunity to find candidates who are truly the best fit for the firm.

The professional services panel raised a more fundamental issue: if AI deflates both the price and quantity of billable work, the traditional business model of charging for hours must change. Panellists pointed toward impact-based or value-based pricing as a possible solution. The question of who benefits from these efficiencies, and who is displaced by them, remains the central inequality question of the AI transition.

We are grateful to our partners at CEPR and the INSEAD Stone Centre for their collaboration on this insightful workshop.

For more insights on this topic, we recommend reading our recap of the Stone Centre Workshop on AI & Economics.

Authors

Stone Centre at UCL