AI superiority at describing images: not so alleged after all?
Could it be that AI can image-describe circles around even me? And that the only ones whom my image descriptions still satisfy are Mastodon's alt-text police?
I think I've reached the point at which I describe my images for nobody but the alt-text police anymore. The point at which I keep ramping up my efforts, increasing my description quality and declaring all my previous image descriptions obsolete and hopelessly outdated, only to keep an edge over those who try hard to enforce quality image descriptions all over the Fediverse, and who might stumble upon one of my image posts in their federated timelines by chance.
For blind or visually-impaired people, my image descriptions ought to fall under "better than nothing" at best, and even that only if they have the patience to listen to them read out in their entirety. But even my short descriptions in the alt-text are already too long, often surpassing the 1,000-character mark. And they're often devoid of text transcripts for lack of space.
My full descriptions that go into the post itself are probably mostly ignored, not least because nobody on Mastodon actually expects an image description anywhere other than the alt-text. On top of that, they're even longer. Five-digit character counts, image descriptions longer than dozens of Mastodon toots, are my standard. Necessarily so: I can't see how the kind of images I post could be sufficiently described in significantly fewer characters.
But it isn't only about the length. It also seems to be about quality. As @Robert Kingett, blind points out in this Mastodon post and in the blog post linked from it, blind or visually-impaired people generally prefer AI-written image descriptions over human-written ones. Human-written image descriptions lack effort, they lack detail, they lack just about everything. AI descriptions, in comparison, are highly detailed and informative. And I guess when they talk about human-written image descriptions, they mean all of them.
I can upgrade my description style as often as I want. I can try to make it more and more inclusive by changing the way I describe colours or dimensions as much as I want. I can spend days describing one image, explaining it, researching necessary details for the description and explanation. But from a blind or visually-impaired user's point of view, AI can apparently write circles around that in every way.
AI can apparently describe and even explain my own images about an extremely niche topic more accurately and in greater detail than I can: all the details I describe and explain, without exception, plus even more on top of that.
If I take two days to describe an image in over 60,000 characters, it's still sub-standard in terms of quality, informativeness and level of detail. AI takes only a few seconds to generate a few hundred characters which apparently describe and explain the self-same image at higher quality, more informatively and in greater detail. It may even be able not only to identify exactly where an image was created, even if that place is only a few days old, but also to explain that location, in no more than 100 characters or so, to someone who doesn't know anything about virtual worlds.
Whenever I have to describe an image, I always have to throw someone under the bus. I can't satisfy everyone equally at the same time. My detailed image descriptions are too long for many people, be it people with a short attention span, be it people with little time. But if I shortened them dramatically, I'd have to cut information to the disadvantage not only of neurodiverse people who need things explained in great detail, but also of blind or visually-impaired users who want to explore a new and previously unknown world through that one image, just as sighted people can let their eyes wander around it.
Apparently, AI is fully capable of satisfying everyone equally at the same time because it can convey more information in only a few hundred characters.
Sure, AI makes mistakes. But apparently, AI still makes fewer mistakes than I do.
#AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #AI #AIVsHuman #HumanVsAI