TY - GEN
T1 - Examining and Mitigating Ability-bias in LLMs via Self-Reflection
AU - Iyer, Neel
AU - Jha, Akshita
AU - Pradhan, Alisha
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/10/15
Y1 - 2025/10/15
N2 - Large language models (LLMs) (e.g., ChatGPT) are rapidly integrating into our daily lives, fundamentally shaping how we engage with information, process it, and make decisions. Despite their significant potential, LLMs can encode social biases (e.g., gender, culture) that amplify problematic and stereotypical representations of marginalized groups. Given the discriminatory impact that bias in LLMs can have on people with disabilities, in this work we examine ability bias in LLMs. We analyze LLM responses to a set of carefully crafted prompts across different abilities, and explore self-reflection through prompt chaining as a debiasing approach. Our findings surface linguistic associations that LLMs encode with different disabilities. We note the types of justifications or rationalizations provided as explanations in LLM responses, which has implications for the trust placed in LLM responses. Our proposed approach of model self-reflection demonstrates improvement in LLM responses and thereby contributes to the debiasing literature.
AB - Large language models (LLMs) (e.g., ChatGPT) are rapidly integrating into our daily lives, fundamentally shaping how we engage with information, process it, and make decisions. Despite their significant potential, LLMs can encode social biases (e.g., gender, culture) that amplify problematic and stereotypical representations of marginalized groups. Given the discriminatory impact that bias in LLMs can have on people with disabilities, in this work we examine ability bias in LLMs. We analyze LLM responses to a set of carefully crafted prompts across different abilities, and explore self-reflection through prompt chaining as a debiasing approach. Our findings surface linguistic associations that LLMs encode with different disabilities. We note the types of justifications or rationalizations provided as explanations in LLM responses, which has implications for the trust placed in LLM responses. Our proposed approach of model self-reflection demonstrates improvement in LLM responses and thereby contributes to the debiasing literature.
KW - ability bias
KW - debiasing
KW - large language models
KW - self-reflection
UR - https://www.scopus.com/pages/publications/105022162099
U2 - 10.1145/3744257.3744268
DO - 10.1145/3744257.3744268
M3 - Conference contribution
AN - SCOPUS:105022162099
T3 - W4A 2025 - Proceedings of the 22nd International Web for All Conference
SP - 29
EP - 35
BT - W4A 2025 - Proceedings of the 22nd International Web for All Conference
PB - Association for Computing Machinery, Inc
T2 - 22nd International Web for All Conference, W4A 2025
Y2 - 28 April 2025 through 29 April 2025
ER -