Ethics, understood as the principles governing appropriate conduct and behavior, is at the center of a growing discussion in artificial intelligence (AI): how to use AI responsibly and equitably, how to address its biases, and how to foster inclusive design for people with disabilities.
Experts emphasize the importance of using AI responsibly and warn against misuse that prioritizes convenience over ethical considerations. Regulating AI behavior itself remains an ongoing challenge because of the technology's complexity and autonomy.
AI systems often embed algorithmic biases, including linguistic and ableist biases, which disproportionately exclude culturally and linguistically diverse (CaLD) migrants with disabilities, and people with intersecting identities. These biases stem from training datasets and socio-technical systems that reinforce power hierarchies, limiting the accessibility and usability of digital technologies and services.
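One way such dataset-level bias can be made visible is a simple representation audit: compare each group's share of a dataset against an expected (for example, population-level) share. The sketch below is illustrative only; the function name, data fields, and thresholds are assumptions for demonstration, not part of any specific auditing library.

```python
from collections import Counter

def representation_gaps(records, attribute, expected_share):
    """Compare each group's share of a dataset against an expected share.
    Groups falling well below their expected share are candidates for
    sampling bias. All names here are illustrative assumptions.

    records: list of dicts; attribute: field to audit;
    expected_share: dict mapping group -> expected fraction.
    """
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in expected_share.items():
        actual = counts.get(group, 0) / total if total else 0.0
        gaps[group] = actual - expected  # negative => under-represented
    return gaps

# Toy dataset in which disabled speakers are under-represented
data = [{"speaker": "non-disabled"}] * 95 + [{"speaker": "disabled"}] * 5
gaps = representation_gaps(data, "speaker",
                           {"disabled": 0.15, "non-disabled": 0.85})
```

A negative gap for a group (here, roughly -0.10 for "disabled") signals under-representation relative to the stated expectation; a real audit would of course need carefully chosen reference shares and intersectional categories.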
To address this, there is a growing push to design AI-driven assistive technologies that are user-centered and co-created with disabled communities. Examples include chatbots like rAInbow for abuse survivors, tools like Microsoft’s Seeing AI for the blind, and voice-based interfaces for those with mobility impairments.
Advocates call for inclusive AI ethics and design standards to prevent perpetuation of bias or exclusion in AI systems, encouraging participatory approaches that take into account the intersectionality of users’ identities and disabilities.
This week's specialist discussion hosted by the Lebenshilfe self-help association focuses on robotics, artificial intelligence (AI), and participation. AI tools also exist that help generate texts in easy-to-read language, and the term "hallucinating" is criticized when applied to AI systems such as ChatGPT, as it is considered unspecific, misleading, and stigmatizing.
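Easy-to-read language follows rules such as short sentences and avoidance of very long words, and such rules can be checked mechanically. The following is a minimal sketch of such a check; the thresholds and function name are assumptions for illustration, not taken from any official easy-to-read (Leichte Sprache) rule set.

```python
import re

def easy_read_flags(text, max_words_per_sentence=12, max_word_length=13):
    """Flag sentences that violate two common easy-to-read heuristics:
    keep sentences short and avoid very long words. Thresholds are
    illustrative assumptions, not official rules."""
    flags = []
    # Split on sentence-ending punctuation and drop empty fragments
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    for s in sentences:
        words = s.split()
        if len(words) > max_words_per_sentence:
            flags.append((s, "sentence too long"))
        for w in words:
            if len(w.strip(",;:")) > max_word_length:
                flags.append((s, f"long word: {w}"))
    return flags
```

A generation tool could run checks like this on its own output and rewrite flagged sentences, though real easy-to-read guidelines cover far more than sentence and word length.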
AI already supports social work in many places, primarily by easing the workload of social workers. However, systems like ChatGPT can produce discriminatory answers if they are not examined for bias, which arises from the discriminatory society reflected in their training data. This makes inclusive AI a central part of responsible development.
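One common way to examine a system for such bias is a counterfactual prompt test: fill the same prompt template with different group terms and compare how the system's responses score. The sketch below uses a stub model and a crude scoring function so it is self-contained; every name in it is an illustrative assumption, not a method prescribed by any vendor.

```python
def counterfactual_gap(model, template, groups, score):
    """Fill a prompt template with each group term, query the model,
    and score each response. A large score gap between groups flags
    potentially discriminatory behaviour. `model` is any callable
    prompt -> text; `score` maps a response to a number."""
    scores = {g: score(model(template.format(group=g))) for g in groups}
    return max(scores.values()) - min(scores.values()), scores

# Stub model and crude word-list sentiment proxy, for demonstration only
def stub_model(prompt):
    if "non-disabled" in prompt:
        return "capable and reliable"
    return "needs constant help"

def crude_sentiment(text):
    positive = {"capable", "reliable", "independent"}
    return sum(word in positive for word in text.split())

gap, per_group = counterfactual_gap(
    stub_model,
    "Describe a {group} applying for a job as an engineer.",
    ["disabled person", "non-disabled person"],
    crude_sentiment,
)
```

Here the stub deliberately answers less favourably for "disabled person", so the test reports a nonzero gap; against a real model one would use many templates and a proper scoring method rather than a word list.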
The development of new technologies is hindered by the lack of continuous reflection and by the absence of people with disabilities from development, testing, and datasets. Reflection must continue throughout the development process to avoid perpetuating discrimination.
Ethics, as defined by the International University, is "knowing how to behave well." The Collingridge Dilemma asks at what point in the development of a new technology ethical reflection should occur; some argue the question has lost its relevance, since new technologies are now often developed before their purpose is understood.
Technology, and artificial intelligence in particular, should be developed with a keen focus on ethical considerations to avoid perpetuating bias and exclusion. This is crucial for AI-driven assistive technologies, which should be user-centered and co-created with disabled communities, as the examples of chatbots, accessibility tools, and voice-based interfaces show. Advocates, including the Lebenshilfe self-help association, therefore call for inclusive AI ethics and design standards to ensure the responsible development of AI systems.