Graduate students from the humanities and the sciences explore the intersection of ethics and artificial intelligence (AI) in a collaborative environment with academic researchers and industry leaders. Through presentations, case studies, short readings, debate and discussion, participants develop an awareness of the issues at stake. Graduate students will:
- Understand and critically evaluate the ethical and social implications of emerging technologies related to artificial intelligence and machine learning;
- Investigate how AI Ethics could become part of their academic research, discover how their research can meaningfully inform the public debate on AI Ethics, and explore entryways into diverse career paths in the public and private sectors;
- Gain practical insights into (i) the tech industry and (ii) technology policy-making, and develop the professional skills required to engage successfully with both, for instance by attending relevant events and conferences, writing for a broad audience, and pursuing job and internship opportunities related to AI Ethics. Past participants in this cohort have written op-eds on the topic, served as fellows with the Responsible AI Institute, taken positions in industry responsible AI divisions, and begun research projects in graduate school and beyond to advance knowledge in the field.
Co-sponsored by GradFUTURES, the Center for Human Values, and the Center for Information Technology Policy
Ethics of AI Participants
"I learned a lot, I met other students with similar interests, and I formed lasting relationships with experts in the respective fields."
Ashley has been at the forefront of building tools and policy interventions to support the responsible use and adoption of innovative technologies, both with her work at the Government of Canada, and as the Executive Director of AI Global, a multi-stakeholder non-profit dedicated to mitigating harm and unintended consequences of AI systems…
Ben Hertzberg is Vice President, Team Lead for Global CIO Research at the Gartner Research Board. The Gartner Research Board is a member-driven research community designed for Gartner's largest client companies (greater than $10B in annual revenue). Hertzberg was previously Director of Research on Gartner's Chief Data and Analytics Officer Research…
Yatin Manerkar joins the Computer Science and Engineering department at the University of Michigan as an Assistant Professor in Fall 2021. He completed his PhD at Princeton University in 2020, where he was advised by Prof. Margaret Martonosi, and he was a postdoctoral researcher at UC Berkeley with Sanjit Seshia in 2021. He has also worked full…
Ben is an advisor at Google on responsible innovation, ethics, and policy issues, working closely with research and product teams. Previously, Ben spent three years at Princeton as a research fellow. He holds a law degree and a PhD from Oxford, practiced as a lawyer, and worked at the EU.
Ben also taught in the first iteration…
Upcoming GradFUTURES Learning Cohort Events
Session 1: February 8, 2021 Noon-1:30 pm ET, Zoom
Kick-off Session to the Ethics of AI GradFUTURES Learning Cohort
Academic and industry researchers will frame the current conversation in the Ethics of AI and lead participants in two case studies.
- Annette Zimmermann, Lecturer (Assistant Professor) in Philosophy at the University of York, and a Technology & Human Rights Fellow at Harvard University
- A. Stevie Bergman *19, Research Scientist/Postdoctoral Researcher in Fairness and Responsibility in AI at Facebook
- Ben Zevenbergen, Google
- Steven Kelts, Lecturer in Political Theory and the Center for Human Values, Princeton University
- Yatin Manerkar *21, incoming Assistant Professor in the Computer Science and Engineering department at the University of Michigan beginning Fall 2021
Session 2: February 19, Noon-1:30 pm ET
Lead: Stevie Bergman *19, Facebook
Topic: Governance of AI systems - practical implementations and limitations in industry.
Pre-Reading: Podcast AI & Human Rights - https://soundcloud.com/asteviebergman
Session 3: March 1, Noon-1:30 PM ET
Lead: Steven Kelts, PhD, Lecturer, Political Theory and Center for Human Values, Princeton
Topic: What biases are inherent in ethical decision-making, and why is ethical decision-making especially difficult in institutional environments where information is unevenly distributed? What would it take to educate engineers (or anyone) within an AI firm to make moral judgments about the technology they're developing, especially when that technology can begin to make certain unexpected judgments itself?
Bio: Steven Kelts is a long-time ethics educator. He has twenty years of experience working with undergraduates, including in residential education environments and intensive, selective seminars. His research is on the history and uses of market ideas, including theories of the organization of the firm. He consults in the private sector with companies looking to align their market value with their ethical values, and to develop curricula that help their employees navigate ethical pitfalls in their organizational culture.
Session 4: March 15, Noon-1:30 pm ET
Lead: Ben Zevenbergen, Google
Topic: The ethical, social, and legal impacts of Internet technologies.
Bio: Ben recently joined Google to work on the intersection of machine learning, ethics, law, and policy. Ben holds a law degree from the universities of Leiden and Amsterdam and a PhD from Oxford University, and completed a postdoctoral fellowship at Princeton University (CITP and UCHV). Previously, Ben practiced as a technology lawyer and was an Internet policy advisor at the EU.
Session 5: March 29, Noon-1:30 pm ET
Lead: Annette Zimmermann, University of York
Topic: Who is responsible for solving algorithmic injustice?
There is plenty of evidence of algorithmic injustice in many different domains, from criminal justice to social welfare, from education to credit scoring. Who bears the primary responsibility for making algorithmic systems more just? Individual engineers? Researchers and experts? The private corporations who sell algorithmic tools? Governments and public institutions which buy and use those tools? Or all of us, the community of democratic citizens as a whole?
Bio: Annette Zimmermann is a Lecturer (Assistant Professor) in Philosophy at the University of York, and a Technology & Human Rights Fellow at Harvard University. Dr Zimmermann’s current research focuses on the political and moral philosophy of AI and machine learning. Before that, Dr Zimmermann was a postdoctoral fellow at Princeton University (2018-2020), with a joint appointment at the Center for Human Values and the Center for Information Technology Policy. They were awarded a DPhil from Nuffield College at the University of Oxford for work in contemporary analytic political and moral philosophy, in particular democratic decision-making, justice, and risk.
Dr Zimmermann's recent research visitor positions include Yale University (2016), the Australian National University (2019), and Stanford University (2020). They have advised policy-makers on AI ethics issues at UNESCO, the Australian Human Rights Commission, the UK Centre for Data Ethics and Innovation, and the OECD. In recognition of their research, Dr Zimmermann received the 2020 David Roscoe Early Career Award in Science, Ethics, and Society from the Hastings Center, and they were named to the 2021 "100 Brilliant Women in AI Ethics" list.
Session 6: April, date TBD
Speaker: Yatin Manerkar *21, incoming Assistant Professor in the Computer Science and Engineering department at the University of Michigan beginning Fall 2021
Topic: Why is AI hard? A technical perspective
Bio: Yatin Manerkar *21 is an incoming Assistant Professor in the Computer Science and Engineering department at the University of Michigan beginning Fall 2021. He completed his PhD at Princeton University in 2020, where he was advised by Prof. Margaret Martonosi. He has also worked full-time at Qualcomm Research, and interned at AMD Research and Amazon Web Services. Yatin's research develops automated formal methodologies and tools for the design and verification of computing systems. His work has been recognized with two best paper nominations, and three of his papers have been honored for their high potential impact as Top Picks or Honorable Mentions in IEEE Micro's annual "Top Picks" issue. Yatin is a recipient of the Wallace Memorial Fellowship, one of Princeton's highest graduate honors, awarded to approximately 25 PhD students annually for the senior year of their doctoral studies. He also received the 2019 Award for Excellence from Princeton's School of Engineering and Applied Science.