Responsible AI Learning Cohort 2024 Session 5: Improving AI Governance by Carrot and by Stick

Date
Oct 31, 2024, 4:30 pm–6:30 pm
Location
Online Event

Details

Event Description
Session Description: AI governance involves the confluence of policies, procedures, norms, laws, and tools that bring together diverse stakeholders to ensure the risks associated with AI systems are effectively managed throughout their development, procurement, and deployment. Organizations can be motivated to enhance governance through two approaches: "sticks," meaning the threat of penalties, and "carrots," meaning the prospect of rewards. While accountability-focused stakeholders often focus on sticks such as regulation and enforcement, successful governance also depends on well-designed carrots. In this talk, I will use AI documentation as a case study to examine how both carrots and sticks can drive improvements in AI governance and discuss the pitfalls of relying too heavily on either.

Series Overview: GradFUTURES is pleased to offer the Responsible AI learning cohort in Fall 2024 in partnership with the Center for Information Technology Policy (CITP) and Princeton University Press (PUP). Led by graduate students, this learning cohort leverages the expertise of Princeton’s faculty, graduate students, staff, alumni, and external partners. The cohort will discuss and examine the realities of Responsible AI principles — fairness, inclusiveness, transparency, reliability and safety, privacy and security, and accountability — in diverse fields through guest speakers, case studies, and an immersive capstone. Upon successful completion of the learning cohort, graduate students will receive a co-curricular certificate of completion and a micro-credential badge.

Responsible AI dimensions: We will follow the definition and dimensions of Responsible AI as outlined by Microsoft: "Responsible AI is an approach to developing, assessing, and deploying AI systems in a safe, trustworthy, and ethical way. It's a framework for building AI systems according to six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability."

Speaker: Amy Winecoff, Center for Democracy & Technology

Amy Winecoff brings a diverse background to her work on AI governance and policy, drawing on both technical and social science disciplines. She is currently the AI Governance Fellow at the Center for Democracy & Technology, where she focuses on the governance of AI systems.

Previously, Amy was a fellow at Princeton’s Center for Information Technology Policy, where she examined how cultural, organizational, and institutional factors shape emerging AI and blockchain companies. She has hands-on experience as a data scientist in the tech industry, having built and deployed recommender systems for e-commerce.

Amy obtained her Ph.D. in Psychology and Neuroscience from Duke University, where her research explored human reward processing and social decision-making.

Accessibility

To request accommodations for this or any event, please contact the organizer or James M. Van Wyck at least 3 working days prior to the event.