Photography: Sameer Khan/Fotobuddy

May 27, 2025

Authors: Margarita Belova (GS, ECE and GradFUTURES Professional Development Associate) and Rachel Metzgar (GS, PSY and GradFUTURES University Administrative Fellow)

The Graduate School’s GradFUTURES professional development program offered an interdisciplinary Responsible AI learning cohort for graduate students in fall 2024, in partnership with the Center for Information Technology Policy (CITP) and Princeton University Press (PUP). The cohort responded to modern demands by equipping students with the knowledge and skills needed to navigate the rapidly evolving landscape of AI technology, through a multidisciplinary curriculum designed to introduce graduate students to ethical AI practices.

“CITP values partnering with the Graduate School’s GradFUTURES program due to the program’s ability to reach a diverse group of graduate students and further our goal of training future leaders at the intersection of AI and society,” said Tithi Chattopadhyay, Executive Director of CITP.

Hosted at the Princeton University Press building and drawing on the expertise of faculty, staff, graduate alumni, graduate students, and external partners, the Responsible AI learning cohort featured eight sessions and mentored experiential capstone projects. Graduate students who successfully completed the sessions, assignments, and capstone projects received a micro-credential (LinkedIn badge) and a certificate.

“One of the many joys of working in university press publishing is the chance to engage in campus learning communities and the generative collaborations that create them. Our partnership with GradFUTURES has shaped PUP in a multitude of ways, most recently through participation in the Responsible AI learning cohort.
At a moment in our history, and for our future, in which we are learning how to create a responsible framework for AI as a scholarly publishing tool, to be in the company of graduate students and a range of amazing speakers has strengthened both our knowledge and our intentions. This experience is just the latest chapter in a wonderful narrative of collaboration with GradFUTURES, which is positively impacting our publishing future,” said Christie Henry, Director, Princeton University Press.

An interdisciplinary group of graduate students from all four divisions learned about the six Responsible AI dimensions proposed by Microsoft: Fairness, Safety, Privacy, Inclusiveness, Transparency, and Accountability. Each guest speaker covered one or two core dimensions through a socio-technical lens. By absorbing and discussing perspectives from research and development, tech applications, public policy, and journalism, the cohort grappled with how to ethically design and deploy AI tools for societal good.

Photography: Sameer Khan/Fotobuddy

Arvind Narayanan (Professor, COS and Director, CITP) and Sayash Kapoor (GS, COS and CITP), keynote speakers for the learning cohort, discussed insights from their recent book “AI Snake Oil” (published by Princeton University Press) about the pitfalls of using AI in scientific research. Their talk prepared listeners to look critically at AI hype and to distinguish “snake oil” sold to the public from working solutions. Narayanan and Kapoor also showed that a substantial amount of AI-backed research in STEM proves difficult to replicate, or is error-prone due to embarrassingly basic mistakes, such as contamination of the test dataset with training data. The session also cautioned listeners against the exaggerated claims frequently made by company spokespersons eager to market their products, and the inflated expectations those claims tend to generate.
Photography: Sameer Khan/Fotobuddy

Over-reliance on technology to solve societal issues

A recurring theme throughout the cohort was society’s increasing over-reliance on technology, a concern raised by experts like Surya Mattu in the session “The Impact of AI and Algorithms on Society.” Mattu pointed out that attempts to leverage machine learning tools to predict long-term human behavior have so far proven futile. Worse, these tools often predict future outcomes by exploiting biases inherited from society, as exemplified by the case of crime risk assessment software: the underlying ML algorithm learned to use race as a key factor in judging whether an arrested person would commit another crime, and it was biased against Black defendants. ProPublica’s detailed analysis of over 7,000 risk scores from Broward County, Florida, demonstrated this bias clearly, showing that Black defendants were nearly twice as likely as white defendants to be incorrectly labeled high-risk, while white defendants were disproportionately labeled low-risk despite reoffending, even after statistically controlling for relevant factors such as criminal history, age, and gender. This evidence sparked deep conversation among students about regulating AI technology used in societal decision making and about possible requirements for transparency and accountability in decision algorithms.

For participants, the discussion shed light on how the ubiquitous adoption of AI may affect social institutions, technology, and our personal lives. Their reflections collectively underscore the balance needed to reap AI’s benefits without falling into a trap of over-dependence on technology. Sebastián Rojas Cabal (GS, SOC) remarked, “The learning cohort was a great opportunity to deepen knowledge about how AI systems are deployed within organizations. It opened my mind to a host of implementation issues beyond the merely technical aspects of how AI systems work.” Another cohort participant, Md.
Baky Billah (*25 SPIA), emphasized that without genuine efforts by policymakers, governments, and corporations, digital technology risks undermining its potential benefits to society.

Another major topic that emerged in the cohort was ethical issues in AI development and deployment. During Vikram Ramaswamy’s (*23 COS) session “Constructing Datasets,” students were surprised by how often efficiency outweighs humanistic values in dataset collection; for example, some employers pay data collectors meager compensation or expose them to disturbing content. In response to Ramaswamy’s talk, graduate students in the cohort underscored the need for a deeper understanding of ethical considerations as AI becomes increasingly integrated into society. Katie Deal (*25 SPIA) reflected, “I thoroughly enjoyed Vikram's presentation, particularly given his focus on the practical implementation of ethical considerations in product design. Understanding the tensions between identifying when someone's data is an object for compensation (commodity) or a subject for interpretation (identity) helped me think through questions that could help the broader industry understand two key issues: ethical compensation for the labor that produces high-quality data, and incentivizing consumer demand for high-quality, ethically collected data.”

Challenges in responsible implementation of AI tools

Another theme that emerged through the cohort discussions was the practical challenge of implementing AI tools in ways that maximize benefits and minimize potential harm. A session led by cognitive scientist Ted Sumers (*23 COS) focused on these real-world complexities, examining how the tension between helpfulness and truthfulness in AI systems can greatly influence outcomes. Students learned how AI models are often optimized to prioritize “truthfulness” over helpfulness, even when truthfulness might not be the best strategy for achieving a goal.
Reflecting on the talk, Andrea Beadle (*25 SPIA) noted, “I assumed that AI was just aiming to present ‘true’ information or facts. From this seminar, I realized that the way humans communicate is more complex than that, and translating our sophisticated communication patterns to machines is quite challenging.” She pointed to the value of educating end-users about the assumptions AI systems make, and suggested giving users the ability to adjust those assumptions, such as whether output should prioritize truthfulness or user-centric helpfulness, so that AI responses can be tailored. The reflections of graduate students in the cohort illuminate the complexity inherent in designing AI systems that responsibly support dynamic human needs. Whether the emphasis is on truthfulness, user satisfaction, or other values, a core implementation challenge is reconciling AI’s human-directed goals with its ethical responsibilities and real-world consequences, complexities that may not be obvious at first.

Opportunities for generative learning and future engagements

After a semester of wrestling with the complexities and challenges of Responsible AI practices, participants praised the program’s ability to foster peer learning and spark new perspectives. Gemma Sahwell (GS, GEO) shared, “The Responsible AI learning cohort was a wonderful experience! I loved getting to learn from my peers across several different disciplines and hearing from a diverse range of experts in the field of AI and responsible AI applications. I would highly recommend this learning cohort to any future graduate student who is interested in learning more about AI and the future of responsible tech.” For many students, the sessions offered invaluable insights into the rapidly evolving AI landscape, along with space to grapple with broader societal and policy questions.
Bobby Ge (GS, MUS) noted how the collective curiosity within the cohort made even the most complex topics approachable: “I initially felt intimidated by the richness and depth of the subject matter, but being with others who were (for the most part) just as new to the field as I was made for a fun, collegial time. The speakers were uniformly brilliant and possessed exciting insights and clarity of vision.”

Photography: Sameer Khan/Fotobuddy

By bringing together graduate students, faculty, staff, graduate alumni, and external partners, the GradFUTURES Responsible AI learning cohort fostered a truly unique and informed interdisciplinary community. Looking ahead, there is a clear need for continued engagement on the ethics, governance, and societal impacts of AI. Participants left the program not only with greater technical knowledge but also with a shared commitment to shaping the future of technology responsibly, and a sense of excitement for the collaborations and discoveries still to come.

“The interdisciplinary GradFUTURES Responsible AI cohort encapsulated the diverse specializations and ways of thinking required for approaching complex socio-technical problems like developing responsible tech across all fields,” said Sonali Majumdar, Assistant Dean for Professional Development. “Graduate students from this cohort continue to deepen their skills and commitment to responsible tech through GradFUTURES experiential fellowships with myriad hosts such as NJ AI hub, AI Lab and Tech United NJ, among others.
”