Ze Wang: Independent research at the intersection of AI and inequality
Stone Centre scholar Ze Wang spoke with us about his research into how AI models evaluate job candidates, the mechanisms behind algorithmic bias, and what this means for inequality.
How has the Stone Centre helped you pursue research you are passionate about?
The Stone Centre grant has been transformative in securing the independence of my research. This is particularly important in the field of AI, where industry funding can subtly influence how results are framed, which findings are emphasised, and even which methodologies are pursued. The Stone Centre's support means I can follow the evidence wherever it leads. Beyond funding, the Stone Centre's meetings and presentation opportunities have been invaluable. Engaging with scholars from different disciplines has shaped my thinking in ways I would not have anticipated, opening up new angles on research design and interpretation.
Tell us a bit more about your research agenda. How did the Stone Centre support it?
I am currently working on two projects supported by the Stone Centre. The first is the largest-scale audit of language models in hiring contexts to date, studying how AI models evaluate job candidates across demographic groups and investigating the internal mechanisms that drive these patterns. The second project explores how AI-generated recommendations influence human discriminatory behaviour in a realistic experimental setting. Both projects sit at the intersection of economics, computer science, and social policy, a space that the Stone Centre is uniquely suited to support.
How does your research contribute to the Stone Centre's mission of developing our knowledge of inequality?
My research focuses on hiring and discrimination, two areas that are likely to be central to how inequality evolves in the coming decades as AI becomes embedded in labour markets. My findings will contribute to a deeper, more nuanced understanding of how new technologies reshape inequality, building on, and sometimes revising, what we thought we knew.
How far along with your project are you? How has it developed?
The first project is in its final stages. We are completing the manuscript and preparing for submission. The project has developed significantly from its original scope: what began as a behavioural audit expanded into a mechanistic investigation of why AI models produce the patterns we observe.
How do you hope to continue to develop your research in future?
I plan to deepen my exploration of how AI interacts with human society — both by documenting the current landscape and by anticipating what lies ahead — combining an economist's perspective on labour markets and discrimination with a technical understanding of how AI models are designed and how their internal mechanisms shape their outputs. I see this interdisciplinary approach as essential: we cannot understand AI's societal impact without understanding both the economics and the engineering.