Artificial Intelligence (AI) #38
Replies: 4 comments 4 replies
-
@rachaelbradley please keep me informed. Thanks!
-
Probably more as an aside: I've now heard at least twice in conversations about new success criteria something to the effect of "We won't have to worry about this anymore, as AI will do this for users automagically". I'd caution that AI (as in machine learning / ML) will still need actual, correct reference material to be trained on in order to attempt to magically correct content into the right shape (e.g. turning content into actual headings when it's not marked up as such). For that to happen, there will still need to be guidelines/criteria that define what correct content actually needs to look like, so that appropriate training data can be created for these learning models. If we just rely on ML trained on the state of the wild web as it is today, we'll end up with inaccessible output that merely mimics the inaccessible state of today's web.
-
Some Resources:
-
A few thoughts on AI (random thoughts, just to spark discussion):

1. AI-generated content (dynamic)

Any AI-generated content that is generated "on the fly" is likely to be impossible to test in any meaningful way. For example, the content may pass a test 9 times out of 10, but then "hallucinate" (offer a completely incorrect, even nonsensical and unrelated, answer) on the 10th generation and create something completely inaccessible. Note that hallucinations can happen even if the input is identical each time, due to the nature of how AI models work. As such, dynamically generated AI content will need some "average score", "number of tests passed", or similar criterion, or be excluded entirely.

Note: for clarity, when I say "dynamically" I mean that AI-generated content is produced on each page load or each action; for example, an AI-powered chat application that uses formatting. Testing that the formatting is valid is nearly impossible, and at best could be performed as an average using automated tooling. An additional thought: as an external tester this is especially difficult, as it would rely on having the appropriate tooling, or being provided with data for tens or hundreds of examples and being able to analyse that data.

2. AI-generated content (static)

Any content that is created using AI but then published only once (i.e. a blog post is "published once", versus a response from an AI agent, which may change each time and falls under the previous heading) should still fall under WCAG. I see no reason why it should be excluded, even if it is auto-published. Maybe there is space for a guideline or note clarifying that all AI-generated "static / single publication / whatever we call it" content is subject to the same guidelines.

3. AI interfaces / front-ends / GUIs

These should be fully testable and conform to WCAG etc. There should be no limitations here.

4. Identification of AI agents

Consistent and repeated identification of AI-generated content is essential, especially in the context of chatbots. A lot of people (especially those who are more vulnerable) can become attached to and involved with AI-generated personalities, conversations, etc. Sufficient safeguards, warnings, and reminders will likely be a necessary part of WCAG or similar guidance.

Example: Replika is an AI "partner" that you can converse with. People who had trauma and did not feel they could speak to a person about it used the chatbot to talk about their trauma. Then the model that powers the chat was updated to be more "advertiser and investor friendly", and a lot of conversations were no longer possible, as they contained content that was graphic, Not Safe For Work (NSFW), etc. This caused a lot of pain and distress for users who were relying on this chatbot to process things. While the ethics of updating models is outside the scope of WCAG, perhaps identification of and reminders about AI agents, plus sufficient notice of potential changes to the model, may be applicable to protect against scenarios like this.

5. Protection from bias

AI is biased. It is only as good as the data it is fed, and our data is biased too (especially towards "Western culture" for things like OpenAI / ChatGPT). People would benefit from being pre-warned about biases and reminded about them regularly. Additionally, there are many scenarios where AI may not have been fed information about disabled users' needs, and bias against disabled people is likely to exist.

Example: an AI system that uses a webcam and allows people to virtually try on clothes. This system may not have been trained on people using wheelchairs and therefore may not recognise them, making the system unusable.

Example: an AI agent is used to pre-screen job applications. This may introduce bias due to "unusual" working patterns that someone with a disability may have had. If this is then used to make decisions about applicants as part of a decision process, it may add to exclusionary practices.

Note: yet again, I have no idea if this is under the purview of WCAG; just another random thought to spark discussion.

These are just my initial ramblings / thoughts; I will have a think and may write a more robust and better thought-out article / response here.
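The "average score" / "number of tests passed" idea from point 1 could be sketched as a sampling harness: generate many outputs, run each through an automated accessibility check, and report the pass rate. This is a minimal illustration only; `fake_generate` and `fake_check` are hypothetical stand-ins (not real tools), simulating a generator that occasionally "hallucinates" bold text instead of a real heading.

```python
import random

def sample_pass_rate(generate, check, n=100, seed=42):
    """Run an automated check over n independent generations and
    return the fraction of outputs that pass (the 'average score')."""
    rng = random.Random(seed)  # seeded so the sample is reproducible
    passes = sum(1 for _ in range(n) if check(generate(rng)))
    return passes / n

# Hypothetical generator: 90% of outputs use a real heading element,
# 10% "hallucinate" visual-only bold text instead.
def fake_generate(rng):
    return "<h2>Topic</h2>" if rng.random() < 0.9 else "<b>Topic</b>"

# Hypothetical check: a stand-in for a real accessibility checker,
# here just verifying that a semantic heading was used.
def fake_check(html):
    return html.startswith("<h2>")

rate = sample_pass_rate(fake_generate, fake_check, n=1000)
print(f"pass rate over 1000 generations: {rate:.2f}")
```

A conformance criterion could then be phrased as a threshold on this rate (e.g. "passes in at least 95% of sampled generations"), which also shows the external-tester problem: you need either the harness itself or the full sample of outputs to compute the number.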
-
As a group we need to get better informed about AI. This discussion topic is a starting point. We'd like to bring in speakers when appropriate.
On this discussion thread, please share resources and experts that the group should know about.