Where technology and responsibility meet

Examine and explore the principles of Responsible AI. Register here!

After running the Ethics of AI learning cohort for many years, GradFUTURES is pleased to offer a Responsible AI learning cohort in Fall 2024 (September 26 to December 5) in partnership with the Center for Information Technology Policy (CITP) and Princeton University Press (PUP). Co-developed by graduate students, this learning cohort draws on the expertise of Princeton faculty, graduate students, staff, and alumni, as well as external partners. Through guest speakers, case studies, and an immersive capstone, the cohort will discuss and examine the realities of Responsible AI principles in diverse fields: fairness, inclusiveness, transparency, reliability and safety, privacy and security, and accountability. Upon successful completion of the learning cohort, graduate students will receive a co-curricular certificate of completion and a micro-credential badge.

Responsible AI Dimensions

We follow the definition and dimensions of Responsible AI as outlined by Microsoft (source): Responsible AI is an approach to developing, assessing, and deploying AI systems in a safe, trustworthy, and ethical way. It is a framework for building AI systems according to six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

Learning Objectives

- Understand the ethical and social implications of emerging AI technologies.
- Discuss the realities of AI fairness, inclusiveness, transparency, reliability and safety, privacy and security, and accountability.
- Learn how the dimensions of Responsible AI are being implemented in practice.
- Discuss emerging technologies, open questions, risks, and challenges of a fast-moving and evolving field.
- Gain practical insights into the tech industry and technology policy-making.
- Examine current trends and future directions of Responsible AI (including the dimensions discussed) in the use of AI across different focus areas (such as policy and governance, health and medicine, tech, arts, and education, among others) through capstone projects.

Fall 2024 Sessions

Sep 26 - Session 1: Introduction and Cognitive Science for AI
Oct 3 - Session 2: Constructing datasets for machine learning
Oct 10 - Session 3: Power to the Public: The Promise and Emerging Dangers of Public Interest Technology
Oct 24 - Session 4
Oct 31 - Session 5: Improving AI Governance by Carrot and by Stick
Nov 7 - Session 6: The Reach of Fairness
Nov 14 - Session 7: Overcoming pitfalls in the use of AI in science
Nov 21 - Session 8: Stress-Testing Responsible AI for Now and Later
Dec 5 - Capstone Presentations

Speakers and Organizers

Margarita Belova, GS, ECE - Professional Development Associate, 2024-25
Sayash Kapoor, GS, COS and CITP - Co-author, AI Snake Oil
Lydia T. Liu - Assistant Professor, Computer Science and CITP
Sonali Majumdar - Assistant Dean for Professional Development
Surya Mattu - Lead, Digital Witness Lab, CITP
Rachel Metzgar, GS, PSY - University Administrative Fellow
Arvind Narayanan - Professor, COS; Director, CITP; co-author, AI Snake Oil
Vikram Ramaswamy, *23, COS - Instructor, Department of Computer Science
Hana Schank - Co-author, Power to the Public; Director of Strategy for Public Interest Technology, New America
Sabrina Shih - AI Policy Integration Lead, Responsible AI Institute
Ted Sumers, *23, COS - Member of Technical Staff, Anthropic
Amy Winecoff - AI Governance Fellow, Center for Democracy & Technology

Co-Sponsors

"I've seen the A.I. Ethics cohort transform how graduate students think about their future careers—both in academia, and in the growing space of corporate research on the ethical implications of A.I. Each year it's an impressive inter-disciplinary group, bringing computer scientists, engineers, sociologists, psychologists, legal scholars and others together with ethicists. And we all leave the room with new questions to ask about our own disciplines!"
–Steven Kelts, Lecturer, University Center for Human Values

"The GradFUTURES AI Ethics Learning Cohort introduced me to machine learning and its wide-ranging social implications. Through the program, I learned to think more rigorously about the application of technology to various real-life scenarios. It’s provided an ethical lens for me to think critically about how our tools shape us. One of my projects as a Responsible AI Institute GradFUTURES Fellow involved assisting the Department of Defense with a project to integrate responsible AI practices into its procurement process. This opportunity gave me the chance to direct my graduate studies towards helping an institution with a relatively under-the-radar but pressing issue."
–Lynne Guey, Graduate Student, SPIA

Get Learning Cohort Alerts!

Watch the GradFUTURES newsletter for upcoming learning cohort and event announcements, or complete the GradFUTURES Learning Cohort Interest Form below.

About GradFUTURES Learning Cohorts

GradFUTURES’ interdisciplinary learning cohorts build community among graduate students and reinforce each student’s graduate training, drawing on their content knowledge to inform the cohort’s investigation of the topic. As part of the cohort, students read and discuss books, articles, and case studies. Learning cohorts typically also include at least one experiential component, such as an immersive project, a site visit, a conference presentation, or fellowship/internship opportunities. Interdisciplinary discussions, reflection, synthesis, community building, and immersive experiences are integral components of each learning cohort experience.