Graduate students explore the ethics of artificial intelligence
Nineteen Princeton graduate students are examining the “Ethics of AI” this year in a Professional Development Learning Cohort offered through the Graduate School. The students plan to enter a variety of fields — from data science to politics — that will require leadership in determining how artificial intelligence is used.
As artificial intelligence advances, the questions surrounding its use have become increasingly complex. To introduce students to the challenges the technology could present and to prepare them to engage in and lead conversations about its ethical use, the Graduate School this year is offering a Professional Development Learning Cohort titled “Ethics of AI.”
This cohort offering is part of the Graduate School’s larger commitment to equip students with skills they can apply across a full range of professional settings in which they may make important contributions after leaving Princeton.
Nineteen graduate students from various disciplines — including psychology, politics, mechanical and aerospace engineering, and quantitative and computational biology — are participating in the five-part learning series. Through presentations, case studies, readings and discussions, they are developing an awareness of the issues at stake and considering their application in real-world situations.
“A recurring theme I hear from leaders in the technology industry is that there is a growing need for people who can engage rigorously with fundamental ethical issues surrounding technological advances,” said Sarah-Jane Leslie, dean of the Graduate School. “A great many of Princeton’s graduate students are exceptionally well-placed to contribute precisely that robust ethical thinking, so we wanted to provide a forum for our students to deepen their knowledge of these issues.”
The cohort began meeting in October 2018 and will conclude its work in March. The participants will have the opportunity to put their theoretical discussions into practice this summer through internships at ServiceNow, a California-based corporation that offers cloud computing to businesses, governments and educational institutions worldwide.
“Thanks to ServiceNow, our students have the opportunity to concretely understand these issues from a fully applied industry perspective,” Leslie said.
At a lunch meeting of the cohort, Elena Di Rosa, a Ph.D. candidate in philosophy, shares her views about the use of AI technology from the perspective of an ethicist.
Ethics of AI session leaders include Ed Felten, the Robert E. Kahn Professor of Computer Science and Public Affairs and director of the Center for Information Technology Policy (CITP); Chloé Bakalar, assistant professor of political science at Temple University and a visiting research collaborator at CITP; Bendert Zevenbergen, a visiting research collaborator at CITP; and Annette Zimmerman, a postdoctoral research associate in values and public policy.
CJ Desai, chief product officer of ServiceNow, and Haleh Tabrizi, director of analytics and insights at ServiceNow, also have visited with the group to provide an industry perspective.
Bakalar and Zevenbergen originally developed the case studies for Princeton Dialogues on Ethics and AI, a research collaboration between CITP and the University Center for Human Values.
In a recent meeting in CITP’s offices in Sherrerd Hall, the students shared lunch while they chewed over a case about the use of data analytics in the public sector. They were presented with a scenario in which the mayor of a town engages a data analytics firm to institute a violence-reduction program.
The students examined the problem from multiple angles: the perspective of the analytics firm and whether it should have taken on the project at all; the predicament of the mayor, concerned about an uptick in crimes and weighing budgetary constraints; the concerns of the public about the use of the data; and the program’s overall impact on civil liberties.
Bakalar said the treatment of the ethical and social questions involved in these case studies is designed to be non-alarmist. “There would be no clearly ‘good’ or ‘bad’ guys in the stories, and no simple, easy answers,” she said. “Instead our aim was to embrace the moral and political complexities of the design and deployment of AI technologies in our world.”
The students also are assigned complementary readings to situate the cases and to prime the ethical discussions that arise.
“The students not only share their own thoughts and opinions, but seem willing to try on different hats and sometimes even play devil’s advocate,” Bakalar said.
Annette Zimmerman, a postdoctoral research associate in values and public policy, and Ed Felten, the Robert E. Kahn Professor of Computer Science and Public Affairs and director of the Center for Information Technology Policy, are among the academic and industry experts leading discussions over the course of the group’s five meetings.
Elizabeth Davison, a Ph.D. student in mechanical and aerospace engineering, said she joined the cohort because the topic relates to her future work as a data scientist.
“A fascinating theme is the concept that the choices engineers make when designing and building their models have vast implications beyond accuracy and interpretability,” she said. “Fairness and accountability are two concerns that we have discussed in the context of AI in society — for example, if a machine determines an outcome, who is accountable if that decision is implemented? The learning and growth afforded by examining questions that span engineering and ethics with a cohort that has expert knowledge across fields has been incredible.”
Jeff Simon, a joint MPA/J.D. student at the Princeton School of Public and International Affairs who plans to go on to a career in public service, said the discussions will be useful in his work as an advocate for better public policies.
“The growing influence of AI means that policymakers will need to be informed about the topic,” Simon said. “Understanding the potential promise and harms of AI and the ethical problems it poses is really important to preserving our constitutional and legal freedoms and for creating public policies that benefit people in the age of AI.”
Elena Di Rosa, a Ph.D. student in philosophy whose research interests lie in ethics, said the group offers a unique opportunity for cross-disciplinary discussion among people with diverse research backgrounds and areas of expertise.
“From the standpoint of someone in academic philosophy, it is exciting to learn about a new realm, particularly one of such great practical import, in which philosophers can apply their training and hopefully offer some unique insights,” Di Rosa said. “I have learned so much from the other members of the learning cohort. They have raised ethical issues that they have encountered in their respective fields, and given that I have little experience in some of these fields, I may not have otherwise considered the particular ethical dilemmas that they have faced.”
The Ethics of AI Professional Development Learning Cohort is the fourth co-curricular, cohort-based learning series sponsored by the Graduate School. The first, held in fall 2017, gave students a closer look at the history, culture and challenges of American higher education. Other cohort series have focused on academic publishing and on venture capital and startups.
A future cohort series will explore arts management.