ACM ASSETS 2019 Workshop on AI Fairness for People with Disabilities

Sponsored by IBM Research and IBM Design

THIS WORKSHOP IS CLOSED AND FULLY BOOKED.

Position papers from the workshop will appear in the October 2019 and June 2020 issues of the SIGACCESS Newsletter. Links to the full articles are provided below.

Papers to be presented as Short Talks

Artificial Intelligence Fairness in the Context of Accessibility Research on Intelligent Systems for People who are Deaf or Hard of Hearing
by Sushant Kafle, Abraham Glasser, Sedeeq Al-khazraji, Larwan Berke, Matthew Seita, and Matt Huenerfauth (Golisano College of Computing & Information Sciences, Rochester Institute of Technology)

Abstract: We discuss issues of Artificial Intelligence (AI) fairness for people with disabilities, with examples drawn from our research on human-computer interaction (HCI) for AI-based systems for people who are Deaf or Hard of Hearing (DHH). In particular, we discuss the need for inclusion of data from people with disabilities in training sets, the lack of interpretability of AI systems, ethical responsibilities of access technology researchers and companies, the need for appropriate evaluation metrics for AI-based access technologies (to determine if they are ready to be deployed and if they can be trusted by users), and the ways in which AI systems influence human behavior and influence the set of abilities needed by users to successfully interact with computing systems.

What is the Point of Fairness? Disability, AI and The Complexity of Justice
by Cynthia Bennett and Os Keyes (University of Washington)

Abstract: Work integrating conversations around AI and Disability is vital and valued, particularly when done through a lens of fairness. Yet at the same time, analyzing the ethical implications of AI for disabled people solely through the lens of a singular idea of "fairness" risks reinforcing existing power dynamics, either by reinforcing the position of existing medical gatekeepers or by promoting tools and techniques that benefit otherwise-privileged disabled people while harming those who are rendered outliers in multiple ways. In this paper we present two case studies from within computer vision - a subdiscipline of AI focused on training algorithms that can "see" - of technologies putatively intended to help disabled people but that, through failures to consider structural injustices in their design, are likely to result in harms not addressed by a "fairness" framing of ethics. Drawing on disability studies and critical data science, we call on researchers in AI ethics and disability to move beyond simplistic notions of fairness and towards notions of justice.

Distributive Justice and Disability in Machine Learning
by Alan Lundgard (MIT)

Abstract: How should algorithmic systems fairly accommodate people with disabilities (PWD)? What benefits---goods, services, or opportunities---should PWD receive, responsive to burdens imposed by these systems? Which accommodations are reasonable, fair, and just? Traditionally, such questions have been addressed by distributive justice, the area of political philosophy that proposes principles for the fair distribution of benefits and burdens across members of society. Today, questions of distributive justice for PWD have new urgency. Algorithmic systems have encoded biases against protected groups, as in the unfair denial of employment opportunities for women. Just as gender and racial minority groups are legally protected---meaning discrimination on the basis of protected group membership is against the law---so too are PWD. Yet PWD have long been denied employment opportunities for which they are qualified, and algorithmic systems could encode these biases. Fair machine learning research aims to reduce algorithmic bias by operationalizing principles of distributive justice. Accordingly, when operationalizing these principles, it is important to address whether they accommodate all protected groups. This is especially true of PWD, who present distinct challenges to principles of distributive justice that have already been operationalized, such as Rawls's difference principle, and equality of opportunity. This paper presents disability-informed challenges to these principles, contextualized in algorithmic systems used for hiring and employment.

Artificial Intelligence and the Dignity of Risk
by Emily Shea Tanis and Clayton Lewis (Coleman Institute for Cognitive Disabilities)

Abstract: The increased use of AI-based systems poses risks and opportunities for people with cognitive disabilities. On the one hand, automated administrative systems, such as job applicant screeners, may disadvantage people whose patterns of strengths and weaknesses, and whose life circumstances, differ from those commonly seen in pools of data. On the other hand, people with cognitive disabilities stand to gain from AI’s potential to provide superior support, such as speaker-dependent speech recognition. Further, privacy concerns are heightened, both because of greater likelihood that people with uncommon combinations of attributes can be identified from their data, and because of the potential for discrimination and exploitation. It is important that people with disabilities are able to make self-directed choices about the tradeoffs among risks and benefits, rather than being denied the dignity of risk that others have. Enabling this calls for advances both in technology and in organization.

Fairness of AI for People with Disabilities: Problem Analysis and Interdisciplinary Collaboration
by Jason J.G. White (Educational Testing Service)

Abstract: There are several respects in which recent developments in machine learning-based artificial intelligence pose challenges of fairness for people with disabilities. In this presentation, and in the accompanying position paper, some of the central problems are identified and briefly reviewed from a philosophical perspective motivated by a broad concern for social justice, emphasizing the role of ethical considerations in informing the problem analysis.

Fairness Issues in AI Systems that Augment Sensory Abilities
by Leah Findlater, Steven Goodman, Yuhang Zhao, Shiri Azenkot, and Margot Hanley (University of Washington and Cornell Tech)

Abstract: Systems that augment sensory abilities are increasingly employing AI and machine learning (ML) approaches, with applications ranging from object recognition and scene description tools for blind users to sound awareness tools for d/Deaf users. However, unlike many other AI-enabled technologies, these systems provide information that is already available to non-disabled people. In this paper, we discuss unique AI fairness challenges that arise in this context, including accessibility issues with data and models, ethical implications in deciding what sensory information to convey to the user, and privacy concerns both for the primary user and for others.

Toward Fairness in AI for People with Disabilities: A Research Roadmap
by Anhong Guo, Ece Kamar, Jennifer Wortman Vaughan, Hanna Wallach, and Meredith Ringel Morris (Microsoft Research and Carnegie Mellon University)

Abstract: AI technologies have the potential to dramatically impact the lives of people with disabilities (PWD). Indeed, improving the lives of PWD is a motivator for many state-of-the-art AI systems, such as automated speech recognition tools that can caption videos for people who are deaf and hard of hearing, or language prediction algorithms that can augment communication for people with speech or cognitive disabilities. However, widely deployed AI systems may not work properly for PWD, or worse, may actively discriminate against them. These considerations regarding fairness in AI for PWD have thus far received little attention. In this position paper, we identify potential areas of concern regarding how several AI technology categories may impact particular disability constituencies if care is not taken in their design, development, and testing. We intend for this risk assessment of how various classes of AI might interact with various classes of disability to provide a roadmap for future research that is needed to gather data, test these hypotheses, and build more inclusive algorithms. (Paper available on arXiv)

Designing Accessible, Explainable AI (XAI) Experiences
by Christine T. Wolf and Kathryn E. Ringland (IBM Research Almaden and Northwestern University)

Abstract: Explainable Artificial Intelligence (XAI), a field that develops techniques to render complex AI and machine learning (ML) models comprehensible to humans, has taken off in recent years. Despite the growth of XAI techniques, we know little about the challenges of leveraging such explainability capabilities in situated settings of use. In this position paper, we discuss some particular issues around the intersection between accessibility and XAI. We outline two primary concerns: one, accessibility at the interface; and two, tailoring explanations to individuals’ diverse and changing explainability needs. We illustrate these issues by discussing two application areas for AI/ML systems (aging-in-place and mental health management) and discuss how issues arise at the nexus between explainability and accessibility.

Unintended Machine Learning Biases as Social Barriers for Persons with Disabilities
by Ben Hutchinson, Vinodkumar Prabhakaran, Emily Denton, Yu Zhong, and Stephen Denuyl (Google)

Abstract: Persons with disabilities face many barriers to full participation in society, and the rapid advancement of technology has the potential to create ever more. Building equitable and inclusive technologies for people with disabilities demands attention not only to accessibility but also to how social attitudes towards disability are represented within technology. Representations perpetuated by machine learning (ML) models often inadvertently encode undesirable social biases from the data on which they are trained. This can result, for example, in text classification models producing very different predictions for "I am a person with mental illness" and "I am a tall person". In this paper, we present evidence of such biases in existing ML models, and in data used for model development. First, we demonstrate that a machine-learned model to moderate conversations classifies texts which mention disability as more "toxic". Similarly, a machine-learned sentiment analysis model rates texts which mention disability as more negative. Second, we demonstrate that neural text representation models that are critical to many ML applications can also contain undesirable biases towards mentions of disabilities. Third, we show that the data used to develop such models reflects topical biases in social discourse which may explain such biases in the models---for instance, gun violence, homelessness, and drug addiction are over-represented in discussions about mental illness.
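
The kind of probe described in this abstract can be illustrated with a simple perturbation test: score a set of sentences that differ only in whether they mention a disability and compare the model's outputs. The sketch below is an illustration only, not the authors' experimental setup; the model choice, the example sentences, and the scoring convention are assumptions.

    # Illustrative sketch only: probe an off-the-shelf sentiment model with
    # sentences that differ only in a disability mention. The model choice and
    # sentences are assumptions, not the setup used in the paper.
    from transformers import pipeline  # assumes the Hugging Face transformers package

    sentiment = pipeline("sentiment-analysis")  # default English sentiment model

    sentences = [
        "I am a tall person.",  # baseline with no disability mention
        "I am a deaf person.",
        "I am a blind person.",
        "I am a person with mental illness.",
    ]

    def signed_score(result):
        # Map the pipeline output to a signed score in [-1, 1].
        return result["score"] if result["label"] == "POSITIVE" else -result["score"]

    baseline = signed_score(sentiment(sentences[0])[0])
    for text in sentences:
        score = signed_score(sentiment(text)[0])
        # A large negative gap relative to the baseline suggests the model is
        # treating the disability mention itself as a negative signal.
        print(f"{text!r}: score={score:+.3f}  gap_vs_baseline={score - baseline:+.3f}")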

Papers to be presented as Posters

Learning to Say No: When FATE is too Late
by Rua M. Williams (University of Florida)

Abstract: Among growing concerns about the disproportionate dangers AI advances pose to marginalized groups, proposals for a procedural solution to ethics in AI abound. As each framework for enforcing ethics has its exploits exposed, a new cog is added to the orrery. Perhaps it is time to consider that some systems may be inherently violent, even if they are fair. We are not going to program our way into justice. We have to learn to say no to building violent things.

Perspectives of People with Essential Tremors on the Privacy of Adaptive Assistive Technologies
by Foad Hamidi, Kellie Poneres, Aaron Massey, and Amy Hurst (University of Maryland Baltimore County and NYU)

Abstract: Assistive technologies and accessibility applications that use machine learning and other techniques to monitor user behavior and adapt their functionality can offer benefits to people with disabilities. Despite their potential, these systems’ access to user data presents an often-overlooked privacy tradeoff between usability and disclosing ability data. In this research, we study the perspectives of end-users on the privacy aspects of these technologies using interviews in which we utilized a research probe and a series of printed graphical elements. We believe that a better understanding of diverse end-user perspectives can inform the design of future technologies that balance improved performance with supporting user privacy.

AI-Assisted UI Design for Blind and Low-Vision Creators
by Venkatesh Potluri, Tadashi Grindeland, Jon E. Froehlich, and Jennifer Mankoff (University of Washington)

Abstract: Visual aesthetics are critical to user interface (UI) design and usability. Prior work has shown that website aesthetics—which users evaluate in a ‘split second’ upon page load—are a definitive factor not just in engaging users online but also in impacting opinions about usability, trustworthiness, and overall user satisfaction. Currently, however, there is limited support for blind or low-vision (BLV) creators in designing, implementing, and/or assessing the visual aesthetics of their UI creations. In this workshop paper, we consider AI-assisted user interface design as a potential solution. We provide background on related research in AI-assisted design and accessible programming, describe two preliminary studies examining BLV users’ current understanding of UIs and their ability to represent them with lo-fi methods, and close by discussing key open areas such as supporting BLV creators throughout the UI design process.

Using Computational Ethnography to Enhance Curation of Real-world Data (RWD) of Individuals Living with Chronic Pain and Invisible Disability
by Rhonda J. Moore, Ross Smith, Qi Liu, Rebecca Racz, and Phaedra Boinodiris (U.S. Food and Drug Administration and University College Dublin)

Abstract: Chronic pain is a significant source of suffering, disability and societal cost in the US. This paper takes an intersectional approach to enhancing AI fairness, arguing that computational ethnography is a method that can provide greater insight into the patient experience of living with chronic pain and invisible disability. First, we provide a brief definition of chronic pain and invisible disability, also describing limitations in existing data collection methods that curate snapshots of clinically verifiable pain and disability experience while rendering other experiences invisible, or as outliers. Then we discuss how computational ethnography, as a multimodal real-world data research methodology, enhances curation of inclusive intersectional knowledge bases, thereby expanding existing boundaries of how we understand AI fairness in terms of inclusiveness, bias mitigation, and transparency for disability use cases. AI, big data, and machine learning offer tremendous opportunities to improve the lives of those with chronic pain and invisible disability, but with this technology comes the responsibility for fair and unbiased applications.

Stranded at the Edges and Falling through the Cracks?
by Jutta Treviranus (Ontario College of Art and Design University)

Abstract: While AI fairness, ethics and bias have received attention of late, the strategies to counter bias and exclusion have primarily focused on AI treatment of well-defined, protected identity groups. These strategies do not account for very small minorities and outliers, or for individuals who are unique and highly diverse, such as people experiencing disabilities. Filling data gaps, ensuring proportional representation, and eliminating algorithmic bias will not address the fact that majority data will always overwhelm minority data and needs (no matter how critical) in population-data-based decisions. Before attention shifts and we conclude that the issue of AI fairness has been accounted for, we need to address the treatment of highly diverse, unique individuals who do not fit into defined groups, or they will encounter vicious cycles of exclusion amplified through automated decisions. Early experiments in alternative treatment of population data will be discussed.

Facial Analysis Models Do Not Perform Well on Faces of Individuals with Dementia
by Babak Taati, Azin Asgarian, Shun Zhao, Siavash Rezaei, Ahmed B. Ashraf, M. Erin Browne, Kenneth M. Prkachin, Alex Mihailidis, and Thomas Hadjistavropoulos (University of Toronto)

Abstract: Facial analysis is an AI technology that can play an important role in providing care to individuals living with dementia. For example, frequent ambient analysis of facial expressions could automatically detect expressions of pain in older adults with moderate to severe dementia who might otherwise be unable to communicate their pain. The use of such technology, however, requires that facial analysis models perform sufficiently well on faces of people with dementia. In experimental analysis with multiple facial landmark detection models, we have shown that this is not the case. Facial landmark detection models perform significantly worse when evaluated on faces of older adults with dementia vs. cognitively healthy older adults. The bias exists in all facial regions and when analysing either front view or profile view images or videos. The findings highlight that careful attention to issues of bias and fairness is needed when developing and applying AI solutions to healthcare applications.
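
One way to make this kind of group-wise comparison concrete is to compute a normalized landmark error separately for each cohort and compare the averages. The sketch below is an illustration under assumed conventions (a standard 68-point landmark scheme with inter-ocular normalization), not the models or data used in the paper.

    # Illustrative sketch: compare facial landmark error between two cohorts.
    # The arrays, 68-point landmark scheme, and normalization are assumptions
    # for illustration, not the paper's data or models.
    import numpy as np

    def normalized_mean_error(pred, gt):
        # Mean point-to-point error for one face, normalized by the
        # inter-ocular distance (outer eye corners, indices 36 and 45 in
        # the 68-point annotation scheme).
        interocular = np.linalg.norm(gt[36] - gt[45])
        per_point = np.linalg.norm(pred - gt, axis=1)
        return per_point.mean() / interocular

    def cohort_error(predictions, ground_truths):
        # Mean and standard deviation of the normalized error over a cohort.
        errors = [normalized_mean_error(p, g) for p, g in zip(predictions, ground_truths)]
        return float(np.mean(errors)), float(np.std(errors))

    # dementia_preds / control_preds would hold landmark detector outputs and
    # dementia_gt / control_gt the manual annotations (hypothetical names):
    # mean_d, std_d = cohort_error(dementia_preds, dementia_gt)
    # mean_c, std_c = cohort_error(control_preds, control_gt)
    # A consistently higher mean_d than mean_c is the kind of performance gap
    # the abstract reports.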

Artificial Intelligence: The Importance of Labels and Diversity among Deaf and Hard of Hearing People
by Raja Kushalnagar (Gallaudet University)

Abstract: The rapid growth in Artificial Intelligence applications and their associated datasets means that the creators face increasing difficulties in ensuring that their algorithms continue to be fair, accountable, transparent and ethical. This paper explores the characteristics of deaf and hard of hearing people and communities and analyzes how these characteristics can influence the fairness, accountability, transparency and ethics of current and future AI systems.

Fairness in Data Collection to Train Machine Learning Models for Persons with Disabilities in Africa
by Jefferson Sankara (Lori Systems LTD)

Abstract: In this paper, I endeavor to expose the position of fairness in data collection to train machine learning models for persons with disabilities in Africa. I argue that the increasing application of Artificial Intelligence (AI) in society increases the chances of bias for persons with disabilities (PWDs), and agree that fairness for people with disabilities is different to fairness for other protected attributes such as age, gender or race. This difference is more pronounced in Africa than anywhere else. Trewin details the difficulty of accessing disability information due to its sensitive nature, which results from efforts to avoid the potential for discrimination. In addition, I expose how this challenge is compounded in Africa by our cultural norms. I evaluate the effects of discrimination against PWDs based on lessons learnt from other categories of discrimination, and finally suggest ways to introduce fairness when collecting data to train Machine Learning (ML) models for use by PWDs in Africa.