Ze Wang: Independent research at the intersection of AI and inequality

Stone Centre scholar Ze Wang spoke with us to discuss his research into how AI models evaluate job candidates, the mechanisms behind bias, and what it means for inequality.

How has the Stone Centre helped you pursue research you are passionate about?

The Stone Centre grant has been transformative in securing the independence of my research. This is particularly important in the field of AI, where industry funding can subtly influence how results are framed, which findings are emphasised, and even which methodologies are pursued. The Stone Centre's support means I can follow the evidence wherever it leads. Beyond funding, the Stone Centre's meetings and presentation opportunities have been invaluable. Engaging with scholars from different disciplines has shaped my thinking in ways I would not have anticipated, opening up new angles on research design and interpretation.

Tell us a bit more about your research agenda. How did the Stone Centre support it?

I am currently working on two projects supported by the Stone Centre. The first is the largest-scale audit of language models in hiring contexts to date, studying how AI models evaluate job candidates across demographic groups and investigating the internal mechanisms that drive these patterns. The second project explores how AI-generated recommendations influence human discriminatory behaviour in a realistic experimental setting. Both projects sit at the intersection of economics, computer science, and social policy, a space that the Stone Centre is uniquely suited to support.

How does your research contribute to the Stone Centre's mission of developing our knowledge of inequality?

My research focuses on hiring and discrimination, two areas that are likely to be central to how inequality evolves in the coming decades as AI becomes embedded in labour markets. The findings from these projects will contribute to a deeper, more nuanced understanding of how new technologies reshape inequality, building on, and sometimes revising, what we thought we knew.

How far along with your project are you? How has it developed?

The first project is in its final stages. We are completing the manuscript and preparing for submission. The project has developed significantly from its original scope: what began as a behavioural audit expanded into a mechanistic investigation of why AI models produce the patterns we observe.

How do you hope to continue to develop your research in future?

I plan to deepen my exploration of how AI interacts with human society — both by documenting the current landscape and by anticipating what lies ahead — combining an economist's perspective on labour markets and discrimination with a technical understanding of how AI models are designed and how their internal mechanisms shape their outputs. I see this interdisciplinary approach as essential: we cannot understand AI's societal impact without understanding both the economics and the engineering.

Authors

Stone Centre at UCL