ACM ASSETS 2019 Workshop on AI Fairness for People with Disabilities
Sunday, October 27, 2019
Monongahela Room, Omni William Penn Hotel, Pittsburgh, PA
Sponsored by IBM Research and IBM Design
THIS WORKSHOP IS FULL.
This workshop is not part of the official ASSETS conference program. Because it is fully booked, we cannot accept new registrations, but you can contact aiworkshop-assets19@acm.org to be added to the waiting list. Position papers from the workshop will appear in the October 2019 and June 2020 issues of the SIGACCESS Newsletter.
8:30 a.m.: Registration, Poster Set-Up. Pre-workshop breakfast generously provided by Microsoft.
9:00 a.m.: Welcoming Remarks from Workshop Organizers, and Attendee Introductions
9:30 a.m.: Keynote talk by Alexandra Reeve Givens, Executive Director of the Institute for Technology Law & Policy at Georgetown University
Alexandra Reeve Givens is the Executive Director of Georgetown Law’s Institute for Technology Law & Policy. The Institute educates students and produces law and policy research on issues such as inclusive innovation, the use of technology to promote access to justice, and how policymakers should respond to the opportunities and challenges presented by new technologies. It has recently launched a multi-year project on algorithmic fairness and the rights of people with disabilities. Alexandra previously served as the Chief Counsel for Intellectual Property and Antitrust on the Senate Judiciary Committee, where she worked on issues relating to innovation and consumer protection. She began her career as a litigator at Cravath, Swaine & Moore in New York City, after graduating from Yale University and Columbia Law School. In addition to her role at Georgetown, Alexandra serves as Vice Chair of the Christopher and Dana Reeve Foundation, which is dedicated to funding innovative research and improving quality of life for the millions of people living with paralysis. Alexandra was 11 when her father, Christopher Reeve, suffered a life-altering spinal cord injury that left him paralyzed from the neck down. She has served on the Foundation’s Board of Directors since 2006.
The Legal Framework: Assessing the Legal Tools to Address Algorithmic Fairness for People with Disabilities
Abstract: Recent years have seen a surge in conversations about fairness, accountability and transparency in the use of algorithmic systems. Mainstream newspapers describe the flaws of hiring tools that perpetuate inequity by selecting candidates who resemble current employees. Researchers have underscored the disparate impact of “risk assessment” scores that assess a criminal defendant’s likelihood of reoffending using proxies for income and race. Increasingly, researchers, advocates, and—to some extent—the broader public are raising alarm about the ways in which algorithmic systems can replicate, mask, codify and scale existing biases. The potential impact of such systems on disabled people has so far received much less attention.
This presentation examines the handful of court cases that have begun to challenge algorithmic systems for discrimination and other harms. It explores the legal theories likely to underpin future challenges, including how companies currently adopting such systems appear to interpret the legal risks. It then presents several distinct ways in which these legal theories fall short in upholding the rights and interests of people with disabilities. Drawing on these vulnerabilities, the presentation calls for a research agenda that, among other things:
- more fully documents the ways in which algorithmic systems may exclude or adversely impact people with disabilities;
- addresses how the diversity of disabilities, and many people’s understandable reluctance to disclose their disabilities, complicate efforts to detect bias and respond to it (a minimal sketch of such a bias check follows this abstract);
- recognizes that people’s right to privacy in matters of personal health stands in particular conflict with algorithms that harm people based on inferences about their perceived health or ability;
- grapples with fundamental questions about whether predictive models can ever adequately consider individuals who are statistical “outliers”, or whose diversity of circumstances defies category-based treatment;
- interrogates the objectives, outcomes, and inherent trade-offs involved in using algorithmic systems, doing so in a way that centers the interests of the user-subject, not just the entity using the system; and
- examines the obligations (legal and non-legal) of entities designing and deploying algorithmic systems, the rights of user-subjects of these systems, and how those obligations and rights can be protected and enforced.
The decision of whether and how to design, deploy, and monitor algorithmic systems should involve far more than legal obligations, bringing in questions of ethics, distributive justice, agency and more. Nevertheless, the legal framework provides relevant context for such conversations, and illuminates key areas where further research and analysis are required.
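To make the bias-detection problem raised in the agenda above concrete, the sketch below applies the EEOC's informal "four-fifths rule" for disparate impact to hypothetical hiring-tool outcomes. The group labels, applicant counts, and the 0.8 threshold's application here are illustrative assumptions, not material from the keynote.

```python
# Minimal sketch of a disparate-impact ("four-fifths rule") check on a
# hypothetical hiring tool's outcomes. All numbers are made up for
# illustration; they are not data from the keynote or the workshop.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants that the tool selected."""
    return selected / applicants

def impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the reference group's rate."""
    return group_rate / reference_rate

# Hypothetical outcomes: 50 of 400 applicants without a disclosed
# disability are selected, versus 3 of 40 applicants who disclosed one.
reference = selection_rate(50, 400)   # 0.125
disclosed = selection_rate(3, 40)     # 0.075

ratio = impact_ratio(disclosed, reference)  # 0.6
print(f"Impact ratio: {ratio:.2f}")

# The EEOC's informal "four-fifths rule" flags ratios below 0.8 as
# potential adverse impact. Note the caveats the abstract raises: with
# only 40 disclosed applicants the estimate is statistically noisy, and
# applicants who chose not to disclose are invisible to this check.
if ratio < 0.8:
    print("Potential adverse impact against the disclosed group.")
```

Even this simple check presupposes that group membership is known and well populated, which is precisely the disclosure, privacy, and outlier tension the agenda items above describe.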
10:15 a.m.: Poster Session + Coffee Break
List of Posters: (Poster abstracts are available on the Workshop Abstracts page.)
- “Learning to Say No: When FATE is too Late” by Rua M. Williams (University of Florida)
- “Perspectives of People with Essential Tremors on the Privacy of Adaptive Assistive Technologies” by Foad Hamidi, Kellie Poneres, Aaron Massey, Amy Hurst (University of Maryland Baltimore County and NYU)
- “AI-Assisted UI Design for Blind and Low-Vision Creators” by Venkatesh Potluri, Tadashi Grindeland, Jon E. Froehlich and Jennifer Mankoff (University of Washington)
- “Using Computational Ethnography to Enhance Curation of Real-world Data (RWD) of Individuals Living with Chronic Pain and Invisible Disability” by Rhonda J. Moore, Ross Smith, Qi Liu, Rebecca Racz, and Phaedra Boinodiris (U.S. Food and Drug Administration and University College Dublin)
- “Stranded at the Edges and Falling through the Cracks?” by Jutta Treviranus (Ontario College of Art and Design University)
- “Facial Analysis Models Do Not Perform Well on Faces of Individuals with Dementia” by Babak Taati, Azin Asgarian, Shun Zhao, Siavash Rezaei, Ahmed B. Ashraf, M. Erin Browne, Kenneth M. Prkachin, Alex Mihailidis, and Thomas Hadjistavropoulos (University of Toronto)
- “Artificial Intelligence: The Importance of Labels and Diversity among Deaf and Hard of Hearing People” by Raja Kushalnagar (Gallaudet University)
- “Fairness in Data Collection to Train Machine Learning Models for Persons with Disabilities in Africa” by Jefferson Sankara (Lori Systems LTD)
10:45 a.m.: Short Talks (x5) (Talk abstracts are available on the Workshop Abstracts page.)
- “Artificial Intelligence Fairness in the Context of Accessibility Research on Intelligent Systems for People who are Deaf or Hard of Hearing” by Sushant Kafle, Abraham Glasser, Sedeeq Al-khazraji, Larwan Berke, Matthew Seita, and Matt Huenerfauth (Golisano College of Computing & Information Sciences, Rochester Institute of Technology)
- “What is the Point of Fairness? Disability, AI and The Complexity of Justice” by Cynthia Bennett and Os Keyes (University of Washington)
- “Distributive Justice and Disability in Machine Learning” by Alan Lundgard (MIT)
- “Artificial Intelligence and the Dignity of Risk” by Emily Shea Tanis and Clayton Lewis (Coleman Institute for Cognitive Disabilities)
- “Fairness of AI for People with Disabilities: Problem Analysis and Interdisciplinary Collaboration” by Jason J.G. White (Educational Testing Service)
12:00 p.m.: Catered Lunch (Sponsored by IBM Research)
1:30 p.m.: Short Talks (x4) (Talk abstracts are available on the Workshop Abstracts page.)
- “Fairness Issues in AI Systems that Augment Sensory Abilities” by Leah Findlater, Steven Goodman, Yuhang Zhao, Shiri Azenkot, and Margot Hanley (University of Washington and Cornell Tech)
- “Toward Fairness in AI for People with Disabilities: A Research Roadmap” by Anhong Guo, Ece Kamar, Jennifer Wortman Vaughan, Hanna Wallach, and Meredith Ringel Morris (Microsoft Research and Carnegie Mellon University)
- “Designing Accessible, Explainable AI (XAI) Experiences” by Christine T. Wolf and Kathryn E. Ringland (IBM Research Almaden and Northwestern University)
- “Unintended Machine Learning Biases as Social Barriers for Persons with Disabilities” by Ben Hutchinson, Vinodkumar Prabhakaran, Emily Denton, Yu Zhong, and Stephen Denuyl (Google)
2:30 p.m.: Coffee Break
2:45 p.m.: Breakout Sessions (Topics TBD based on Workshop Attendees’ Interests)
3:45 p.m.: Closing Remarks from Workshop Organizers
4:00 p.m.: Workshop End
Note: There will be a related evening event that attendees may wish to sign up for. It is not part of the workshop. Information is below:
7:00 p.m.: SIGACCESS Sponsored Event: Project Amelia, an immersive theater experience built around the launch of a groundbreaking new AI product. Registration for this event is now open. Workshop and ASSETS 2019 attendees should have received a registration link by email. If you did not receive this email, please contact aiworkshop-assets19@acm.org.