Date: Sep 26, 2024, 4:30 pm – 6:30 pm
Location: Friend Center 008
Related link: More details in My PrincetonU

Event Description

Series Overview: GradFUTURES is pleased to offer a Responsible AI learning cohort in Fall 2024 in partnership with the Center for Information Technology Policy (CITP) and Princeton University Press (PUP). Led by graduate students, this learning cohort leverages the expertise of Princeton's faculty, graduate students, staff, alumni, and external partners. The cohort will discuss and examine the realities of the Responsible AI principles (fairness, inclusiveness, transparency, reliability and safety, privacy and security, and accountability) in diverse fields through guest speakers, case studies, and an immersive capstone. Upon successful completion of the learning cohort, graduate students will receive a co-curricular certificate of completion and a micro-credential badge.

Responsible AI dimensions: We will follow the definition and dimensions of Responsible AI as outlined by Microsoft: "Responsible AI is an approach to developing, assessing, and deploying AI systems in a safe, trustworthy, and ethical way. It's a framework for building AI systems according to six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability."

Session Description: In the first session, we will get to know the cohort and provide a summary of the syllabus and the expectations for completion. The guest speaker will discuss how tools and insights from cognitive science can be applied to characterize and design AI systems.

Guest Speaker: Ted Sumers (*23, COS), LLM Safety at Anthropic
Talk Title: Cognitive science for artificial intelligence

Princeton alum Ted Sumers will discuss how tools and insights from cognitive science can be applied to characterize and design AI systems.
He will first present work that uses models of human communication to uncover the latent values encoded in large language models. He will then discuss how decades of research into cognitive architectures can be used to structure language agents. Finally, time permitting, we'll discuss some open questions in AI alignment.

Encouraged (but not required) pre-readings:
- How do LLMs Navigate Conflicts between Honesty and Helpfulness? (ICML '24)
- Cognitive Architectures for Language Agents (TMLR '24)

Accessibility: To request accommodations for this or any event, please contact the organizer or James M. Van Wyck at least 3 working days prior to the event.