In December, the US Census Bureau proposed changes to how it categorizes disability. If implemented, the changes would have slashed the number of Americans counted as disabled, even as experts say that disabled people are already undercounted.
The Census Bureau opened its proposal to public comment, and anyone can submit a comment on a federal agency rulemaking on their own. But in this case, the people most affected by the proposal faced extra obstacles to giving their input.
“It was really important to me to try to figure out how to enable those folks as best I could to be able to write and submit a comment,” said Matthew Cortland, a senior fellow at Data for Progress. With that in mind, they created a GPT-4 bot assistant for people who wanted to submit their own comments. Cortland has run commenting campaigns targeting disability-related regulations in the past, but this was their first with the assistance of AI.
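Cortland has not published the bot’s internals, so the following is only a rough sketch of how a comment-drafting assistant built on the GPT-4 API might work; the system prompt, function name, and example notes are illustrative assumptions, not Cortland’s actual implementation.

```python
# Illustrative sketch only -- not Cortland's actual bot. Assumes the
# openai Python package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You help disabled people draft public comments on a proposed US Census "
    "Bureau change to how disability is measured. Turn the user's experiences "
    "and concerns into a clear, respectful comment in their own voice. "
    "Do not invent facts the user did not provide."
)

def draft_comment(user_notes: str) -> str:
    """Turn a commenter's rough notes into a draft regulatory comment."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_notes},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    notes = "Brain fog from long covid. The new questions would miss me entirely."
    print(draft_comment(notes))
```

The design matters: a tool like this lowers the energy cost of drafting while keeping the commenter’s own details at the center, rather than churning out identical form letters.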
“Thank you, this enabled me to produce the kind of comment I’ve always wanted to produce,” one person told them. “There’s too much brain fog for me to do this right now.”
Depending on who’s counting, 12.6 percent or even 25 percent of the US population has disabilities. Disability itself is defined in myriad ways, but it broadly encompasses physical, intellectual, and cognitive impairments along with chronic illnesses; a person with a physical disability may use a wheelchair, for instance, while a severe, energy-limiting illness such as long covid can make the tasks of daily living hard to manage.
AI — whether in the form of natural language processing, computer vision, or generative AI like GPT-4 — can have positive effects on the disability community, but on the whole, the future of AI and disability looks fairly grim.
“The way that AI is often kind of dealt with and used is essentially phrenology with math,” says Joshua Earle, an assistant professor at the University of Virginia whose work connects the history of eugenics with technology. People unfamiliar with disability often hold negative views shaped by media, pop culture, regulatory frameworks, and the people around them, seeing disability as a deficit rather than a cultural identity. A system that devalues disabled lives by custom and design will keep repeating those errors in its technical products.
This attitude was sharply illustrated in the debates over care rationing at the height of the covid-19 pandemic. It also shows up in the form of quality-adjusted life years (QALYs), an AI-assisted “cost-effectiveness” measure used in health care settings to score “quality of life” by external metrics rather than the intrinsic value of someone’s life. The inability to leave the home might count against someone, for example, as would a degenerative illness that limits physical activity or employability. A low score can sink a given medical intervention in cost-benefit analyses: why pay for costly treatments for someone deemed likely to live a shorter life marred by disability?
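In rough terms, a QALY multiplies years of life by a utility weight between 0 (death) and 1 (“perfect health”), and a treatment is judged by its cost per QALY gained. The sketch below is a deliberately simplified illustration (the weights and the $100,000-per-QALY threshold are assumptions for the example, not values from any real health system) of how a low utility weight assigned to disability makes treating a disabled patient look like a bad deal:

```python
# Simplified illustration of QALY-based cost-effectiveness math.
# The utility weights and the $100,000/QALY threshold are illustrative
# assumptions, not values from any specific health system.

def qalys(years: float, utility_weight: float) -> float:
    """Quality-adjusted life years: years lived x utility weight (0..1)."""
    return years * utility_weight

def cost_per_qaly(cost: float, qalys_gained: float) -> float:
    return cost / qalys_gained

TREATMENT_COST = 200_000.0
YEARS_GAINED = 5.0
THRESHOLD = 100_000.0  # a common (and contested) willingness-to-pay cutoff

for label, weight in [("nondisabled patient", 0.9), ("disabled patient", 0.3)]:
    gained = qalys(YEARS_GAINED, weight)
    ratio = cost_per_qaly(TREATMENT_COST, gained)
    verdict = "approved" if ratio <= THRESHOLD else "rejected"
    print(f"{label}: {gained:.1f} QALYs gained, ${ratio:,.0f}/QALY -> {verdict}")

# Same treatment, same years of life gained -- but the lower utility
# weight assigned to disability makes the intervention "not cost-effective."
```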
The promise of AI is that automation will make work easier, but what exactly is being made easier? In 2023, a ProPublica investigation revealed that insurance giant Cigna was using an internal algorithm to automatically flag coverage claims, allowing doctors to sign off on denials en masse, a practice that disproportionately hit disabled people with complex medical needs. Health care is not the only arena in which algorithmic tools can work against disabled people. The pattern is increasingly common in employment, where tools that screen job applicants can introduce bias, as can the logic puzzles and games some recruiters use, or the eye and expression tracking that accompanies some interviews. More generally, says Ashley Shew, an associate professor at Virginia Tech who specializes in disability and technology, it “feeds into extra surveillance on disabled people” via technologies that single them out.
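ProPublica described Cigna’s system as matching each claim’s diagnosis against a list of tests and treatments the insurer deemed acceptable for it, so reviewers could reject the rest in bulk. What follows is a loose, hypothetical reconstruction of that kind of pipeline (all codes, field names, and rules are invented for illustration), showing how quickly “review” collapses into rubber-stamping:

```python
# Hypothetical reconstruction of a batch claim-flagging pipeline, loosely
# modeled on ProPublica's description of Cigna's system. All codes, field
# names, and rules here are invented for illustration.
from dataclasses import dataclass

# diagnosis code -> procedure codes the insurer deems "appropriate"
APPROVED: dict[str, set[str]] = {
    "E11.9": {"83036"},   # type 2 diabetes -> HbA1c test
    "M54.5": {"97110"},   # low back pain  -> therapeutic exercise
}

@dataclass
class Claim:
    claim_id: str
    diagnosis: str
    procedure: str

def flag_for_denial(claims: list[Claim]) -> list[Claim]:
    """Flag every claim whose procedure isn't on the pre-approved list."""
    return [c for c in claims if c.procedure not in APPROVED.get(c.diagnosis, set())]

claims = [
    Claim("A1", "E11.9", "83036"),  # matches the list -> passes
    Claim("A2", "E11.9", "93000"),  # ECG for diabetes -> flagged
    Claim("A3", "M54.5", "72148"),  # MRI for back pain -> flagged
]

flagged = flag_for_denial(claims)
# One click can now "review" the whole batch: no chart is ever opened, and
# patients with complex, atypical needs are the ones flagged most often.
print(f"{len(flagged)} of {len(claims)} claims queued for batch denial")
```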
Technologies like these often rest on two assumptions: that many people are faking or exaggerating their disabilities, which makes fraud prevention critical, and that a life with disability is not a life worth living. It follows that decisions about resource allocation and social inclusion — whether home care services, access to the workplace, or the ability to reach people on social media — need not treat disabled people as the equals of nondisabled people. That attitude is reflected in the artificial intelligence tools society builds.
It doesn’t have to be this way.
Cortland’s creative use of GPT-4 to help disabled people engage in the political process illustrates how, in the right hands, AI can become a valuable accessibility tool. There are countless examples if you look in the right places — for instance, in early 2023, Midjourney released a feature that generates alt text for images, increasing accessibility for blind and low-vision people.
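Midjourney’s implementation isn’t public, but any vision-capable language model can be pressed into alt-text duty. Here is a generic sketch via OpenAI’s API (the model choice and prompt are assumptions, not Midjourney’s method):

```python
# Sketch: generating alt text with a vision-language model. A generic
# illustration via OpenAI's API, not Midjourney's implementation.
from openai import OpenAI

client = OpenAI()

def generate_alt_text(image_url: str) -> str:
    """Ask a vision model for screen-reader-friendly alt text."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model; an assumption here
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": (
                    "Write concise alt text for this image for a screen "
                    "reader user: describe what matters, skip decoration."
                )},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return response.choices[0].message.content

print(generate_alt_text("https://example.com/photo.jpg"))
```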
Amy Gaeta, an academic and poet who studies interactions between humans and technology, also sees potential for AI that “can take really tedious tasks for [disabled people] who are already overworked, extremely tired” and automate them: filling out forms, for example, or offering practice conversations for job interviews and social settings. The same technologies could be used to fight insurance companies over unjust denials.
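The form-filling idea needs little more than a stored profile and a mapping layer, as in this toy sketch (the profile and form schemas are invented): the real tedium is that every agency and insurer names the same facts differently, which is exactly the translation work an assistant could absorb.

```python
# Toy sketch of automating repetitive benefits paperwork. The profile
# fields and form schemas are invented for illustration.
PROFILE = {
    "name": "Jordan Lee",
    "date_of_birth": "1988-03-14",
    "diagnosis": "myalgic encephalomyelitis / chronic fatigue syndrome",
    "physician": "Dr. A. Rivera",
}

# Each form names the same facts differently; map form fields -> profile keys.
FORM_SCHEMAS = {
    "state_benefits": {"applicant_name": "name", "dob": "date_of_birth",
                       "primary_condition": "diagnosis"},
    "insurance_appeal": {"member_name": "name", "condition": "diagnosis",
                         "treating_physician": "physician"},
}

def fill_form(form_name: str) -> dict:
    """Populate one form's fields from the stored profile."""
    schema = FORM_SCHEMAS[form_name]
    return {field: PROFILE[key] for field, key in schema.items()}

for form in FORM_SCHEMAS:
    print(form, "->", fill_form(form))
```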
“The people who are going to be using it are probably going to be the ones who are best suited to understanding when it’s doing something wrong,” remarks Earle in the context of technologies developed around or for, but not with, disabled people. For a truly bright future in AI, the tech community needs to embrace disabled people from the start as innovators, programmers, designers, creators, and, yes, users in their own right who can materially shape the technologies that mediate the world around them.