Date: Sep 26, 2024, 4:30 pm – 6:30 pm
Location: Friend Center 008
Related link: More details in My PrincetonU

Event Description

Series Overview: GradFUTURES is pleased to offer a Responsible AI learning cohort in Fall 2024 in partnership with the Center for Information Technology Policy (CITP) and Princeton University Press (PUP). Led by graduate students, this learning cohort leverages the expertise of Princeton's faculty, graduate students, staff, alumni, and external partners. The cohort will discuss and examine the realities of Responsible AI principles (fairness, inclusiveness, transparency, reliability and safety, privacy and security, and accountability) in diverse fields through guest speakers, case studies, and an immersive capstone. Upon successful completion of the learning cohort, graduate students will receive a co-curricular certificate of completion and a micro-credential badge.

Responsible AI dimensions: We will follow the definition and dimensions of Responsible AI as outlined by Microsoft. "Responsible AI is an approach to developing, assessing, and deploying AI systems in a safe, trustworthy, and ethical way. It's a framework for building AI systems according to six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability."

Session Description: In the first session, we will get to know the cohort and provide a summary of the syllabus and expectations for completion.

Guest Speaker: Ted Sumers (*23, COS), LLM Safety at Anthropic
Talk Title: Cognitive science for artificial intelligence

Princeton alum Ted Sumers will discuss how tools and insights from cognitive science can be applied to characterize and design AI systems.
He will first present work that uses models of human communication to uncover the latent values encoded in large language models. He will then discuss how decades of research into cognitive architectures can be used to structure language agents. Finally, time permitting, we'll discuss some open questions in AI alignment.

Encouraged (but not required) pre-readings:
- How do LLMs Navigate Conflicts between Honesty and Helpfulness? (ICML '24)
- Cognitive Architectures for Language Agents (TMLR '24)

Accessibility: To request accommodations for this or any event, please contact the organizer or James M. Van Wyck at least 3 working days prior to the event.