Dr. Annette Zimmermann is a political philosopher working on the ethics of algorithmic decision-making, machine learning, and artificial intelligence. She has additional research interests in moral philosophy, legal philosophy, and the philosophy of science. In her current research project, "The Algorithmic Is Political", she focuses on the ways in which disproportionate distributions of risk and uncertainty associated with emerging technologies, such as algorithmic bias and opacity, affect democratic values like equality and justice. At Princeton, she is based at the University Center for Human Values and the Center for Information Technology Policy. She holds a DPhil (Ph.D.) and an MPhil from the University of Oxford, as well as a BA from the Freie Universität Berlin. She has held visiting positions at the Australian National University, Yale University, and Sciences Po Paris.
"The ethical and social implications of emerging technologies like AI and machine learning have given rise to significant public debate in recent years. The GradFUTURES Learning Cohort on the Ethics of Artificial Intelligence has been a platform for many illuminating and inspiring conversations across disciplinary boundaries. It has allowed graduate students from social science and humanities backgrounds to learn more about the complex technological foundations of AI, and it has enabled students with technical and scientific backgrounds to engage with the conceptual tools that political and moral philosophy offers for critically exploring important ethical dilemmas linked to emerging technologies.
As part of the seminar, we explored various career paths related to AI ethics, including opportunities in the tech industry, tech policy organizations, and advocacy groups. I have particularly enjoyed collaborating with Princeton's excellent graduate students on a number of public-facing writing projects, which explore questions like: What does it mean for an AI system to be just? How can machine learning lead to biased outcomes that risk undermining the civil and democratic rights of citizens? In what sense do AI systems need to be explainable, and is there an ethical right to an explanation? What does it mean to 'trust' an AI system? Is there a 'translation gap' between philosophers and computer scientists? How much does it matter who is (and isn't) involved in designing AI? I am grateful to Princeton University's Graduate School for seizing this important opportunity to foster students' interdisciplinary engagement with these urgent and fascinating questions."