The Stone Centre Workshop on AI & Economics
“If students lose trust in the ability of universities to award individually differentiated degrees, why should they come?”
This challenge from UCL Vice Provost Kathleen Armour kicked off the day’s work in a packed room at the Institute of Education.
AI & Economics: Rethinking Assessments for the Future took place on Tuesday, 10th June. This Stone Centre-funded event was organised by a group of academics from CTaLE and universities across the UK (including Stone Centre advisory board member Parama Chaudhury), in partnership with the Royal Economic Society. The first session took up Armour’s challenge, discussing how assessment design needs to adapt to the rise of GenAI.
Session 1: The Current Impact of Generative AI on Economics Assessments
Chaired by Alvin Birdi, Director of the Economics Network, this panel — featuring Carlos Cortinhas (Exeter), Peter Postl (UCL), and Swati Virmani (De Montfort) — delved into how assessment design must evolve in response to the large language models available to students.
The discussion kicked off with Carlos Cortinhas presenting findings from a new survey, which revealed that UK academics are generally better informed about institutional AI policies than their non-UK counterparts (primarily from the US). Most universities surveyed "allow but don't encourage" student AI use. Respondents also believed they could adapt by designing assessments that play to AI's weaknesses, such as presentations and in-person exams.
Swati Virmani highlighted the lack of clear, student-facing AI policies, questioning the usefulness of "allow but don't encourage". She also pushed back against Kathleen Armour’s emphasis on in-person assessments, suggesting that, while valuable for foundational modules, a wholesale return to them could be a regressive step. This is a hot topic, running up against funding and logistical constraints at some universities. Peter Postl pointed to the UCL Economics Department’s favourable experience of piloting in-person and scanned assessments for large cohorts this session.
The panel debated how to optimise for both academic integrity and AI literacy. Carlos argued this is achievable with proper training for academics on AI's capabilities and ethical use, advocating a shift from a "culture of fear" to one of "transparency". The scalability of alternative assessment forms such as vivas was a concern for modules with large cohorts. Swati stressed the need to redefine ‘academic misconduct’ around non-disclosure, and the unfairness of penalising AI use without clear ethical guidance. She called for AI literacy to include an understanding of AI's limitations, weaknesses and biases, since only then can it genuinely be tested as a learning outcome or skill.
To incentivise academics' AI adoption, Peter Postl suggested showcasing helpful examples of AI in econometric data analysis to counter the perception of it as a shortcut. He also advised teaching students how AI works, encouraging its use, and requiring them to report it.
With many current forms of assessment compromised by AI, and most universities not carrying out robustness checks of their examination practices, Carlos Cortinhas argued that assessments should be redesigned around "multiple points of contact" and a focus on "learning outcomes". The key principles driving such a redesign are still up for debate; possibilities included analytical skills, problem-solving, creativity, real-world learning, annual refreshes of assessment design, clear policies, and inclusivity, particularly for students who use AI for text-to-speech or translation.
While questions remain particularly around confidentiality and ownership, AI offers compelling benefits for instructors. The panel agreed GenAI could speed up marking and personalise feedback to a degree not currently possible.
The session concluded with a poll: roughly a third of attendees indicated their institutions provided some AI training for staff, and a similar number for students.
Session 2: Skills Economics Graduates Need in an AI-Enabled World
The second session, chaired by Cloda Jenkins (Imperial), featured Tom Aldred (Government Economic Service), Cezary Klimczuk (Finalto Trading), Ylva Baeckström (KCL), Edwin Ip (Exeter), and Ramin Nassehi (UCL). The panel was challenged to define the AI and non-AI skills economics graduates need to remain attractive to employers.
Understanding how economists use AI can provide a framework for the AI skills students should be taught. To that end, Ramin Nassehi opened the discussion with a survey of 114 economists, which found that their AI use fell broadly into three categories: calculations, research and communications, and problem-solving. When asked what graduate AI skills should encompass, respondents' views aligned with their own current use cases.
But what about the use of AI in the workplace more broadly? The panel shared their experiences, citing analysis, coding, research, and recruitment. Ensuring graduates have AI literacy in these areas is not without its challenges, though. Edwin Ip noted a 50% dropout rate from AI-based recruitment processes, particularly amongst women, even though women are more likely to be evaluated favourably by AI because it exhibits less bias than human assessors. Clearly, then, graduates must understand more than just how to apply AI; they must also be taught about its limitations and flaws, which remains difficult given the rapid evolution of current models.
On preparing students for AI-driven interview processes, Edwin Ip noted that the pace of change means the goalposts are constantly moving. Somewhat counter-intuitively, he advised that "average" CVs are likely to be scored better than exceptional ones by current AI models, since what a human would consider outstanding can appear to an AI as an outlier. Once again, though, the overall reduction of bias in AI-driven recruitment was praised as a major advantage for female and minority graduates.
Despite the widespread use of AI, the panel noted some pushback against it in the workplace. Cezary Klimczuk argued that AI cannot replace human decision-making in trading: regulatory restrictions on AI tools mean that human oversight will continue to outweigh any AI-driven efficiency gains.
As the panel came to a close, there was consensus on the importance of interpersonal relationships, teamwork, critical thinking, and asking the right questions. If universities ensure graduates have these skills, they should remain attractive to employers in an AI-enabled world. In turn, university economics education should retain its value.
For more information, please visit the CTaLE website, where you can find the programme and the lecture slides.