Responsible AI Learning Cohort: Session 7 – Overcoming pitfalls in the use of AI in science

Date
Nov 14, 2024, 4:30 pm–6:30 pm
Location
Princeton University Press

Details

Event Description

Session Description: AI can be a valuable tool for scientists, but its use comes with risks. In this session we will discuss three such risks and how to avoid them. First, machine learning code is error-prone, and ML-based scientific findings have a distressingly high rate of failing to reproduce due to modeling flaws. Second, even if a model is built correctly, translating claims about the model into claims about the world requires care and deliberation. Third, while it is tempting to apply predictive models to decision making, many ML-based decision-making systems may be ethically problematic. In the session, we will provide concrete guidance for avoiding these pitfalls.

Guest Speakers: co-authors of AI Snake Oil (Princeton University Press)

  • Sayash Kapoor, graduate student, Computer Science (COS) and CITP; named to TIME’s inaugural list of the 100 most influential people in AI.
  • Arvind Narayanan, Professor, COS, and Director, CITP; named to TIME’s inaugural list of the 100 most influential people in AI.

About AI Snake Oil: AI is everywhere—and few things are surrounded by so much hype, misinformation, and misunderstanding. In AI Snake Oil, computer scientists Arvind Narayanan and Sayash Kapoor cut through the confusion to give you an essential understanding of how AI works and why it often doesn’t, where it might be useful or harmful, and when you should suspect that companies are using AI hype to sell AI snake oil—products that don’t work, and probably never will.

Series Overview: GradFUTURES is pleased to offer the Responsible AI learning cohort in Fall 2024 in partnership with the Center for Information Technology Policy (CITP) and Princeton University Press (PUP). Led by graduate students, this learning cohort leverages the expertise of Princeton’s faculty, graduate students, staff, alumni, and external partners. The cohort will discuss and examine the realities of Responsible AI principles (fairness, inclusiveness, transparency, reliability and safety, privacy and security, and accountability) in diverse fields through guest speakers, case studies, and an immersive capstone. Upon successful completion of the learning cohort, graduate students will receive a co-curricular certificate of completion and a micro-credential badge.

Responsible AI Dimensions: We will follow the definition and dimensions of Responsible AI as outlined by Microsoft: "Responsible AI is an approach to developing, assessing, and deploying AI systems in a safe, trustworthy, and ethical way. It's a framework for building AI systems according to six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability."

Accessibility

To request accommodations for this or any event, please contact the organizer or James M. Van Wyck at least 3 working days prior to the event.