@Sonikku @davidnjoku @lisamelton @_L1vY_ This example also demonstrates how stupid (as in “not intelligent”) LLMs truly are. It doesn’t know that “black” refers to “doctor” or “Caucasian” refers to “child” in the prompt. The relative distances between tokens help guide it to results, but that’s it, and since its training data almost certainly has way more examples of the opposite situation in it, that’s what it gravitates toward. It will always reflect training biases.
@davidnjoku Some months ago when everyone was making jokes about the mythos of "John Mastodon" as if he were a real founder, there were a lot of tall-tale style AI images of him posted. They were universally white. I tried for about two hours to force Dall-e to give me a Black #JohnMastodon but it would only give me weird garbage images that were subpar, poorly drawn, and basic.
@davidnjoku I often tell people to watch out for subtle biases in AI-generated content. It helps to start with examples like this and then say something like, "If it will do something this blatant, imagine what you might be missing if you're not paying close attention."
I'm sorry this exists, but also glad to have another example.
@grammargirl @davidnjoku Unless the engineers have explicitly coded for this, the biases are in society and the data used to train the model. It's rarely the "fault" of AI when it does this stuff; it's really a reflection of how screwed up we are as a society. If Black doctors were frequently photographed treating White children, the AI would be much less likely to have this issue. AI is just making societal issues apparent.
@blterrible @grammargirl I fear that too many of us who love technology can be so willing to understand/explain its failings that we fail to hold it to even minimum standards, fail to force it to aim higher.
No one says, "Accidents happen regularly, so don't expect your plane to stay in the air."
@davidnjoku
I also noticed that it didn't show any female doctors.
About that...
I listened to a podcast where a white woman admitted that when her mother read her the book series "Little House on the Prairie" and it mentioned that a black doctor took care of them when the family was sick with cholera, her mother edited out the color of the doctor because she didn't want to be "divisive".
Thus teaching another little girl to hold on to racist stereotypes that black people can't be doctors.
@LazaroDTormes @Rozzychan @davidnjoku Ohhhh no, you’re on the PILOT! Absolutely nothing bad happens to Dr Kyle. He was on loan to B5 from Earth (translation: I think there was just too much time between the pilot and the first season and they couldn’t keep the actor - so no spoilers 😀)
@Osteopenia_Powers @Rozzychan @davidnjoku Yes, following this well-respected* formula:
I know the image says “right wing” but there are plenty of “well meaning” folks who follow this logic.
Also apparently there’s a book called “Prairie fires” that dives into some really messed up stuff about the actual Ingalls family. “Pa” was apparently a real piece of work - but I’ll maintain my Michael Landon enjoyment. 😂
@davidnjoku Have you tried other racial combinations? From what I've seen, current models don't really understand that "Black African doctor" or "White Caucasian child" are "atomic" things. At best you can say that it sort of understands that those words are related because of proximity. The system is also built on statistical models that match criteria, so depending on your dataset, all doctors might end up looking Chinese because that's what the data contains. Data curation is important.
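[Editor's note: a toy sketch of the statistical point above. This is not how Dall-e actually works; the caption counts are invented for illustration. It just shows that a generator which samples from a skewed training distribution reproduces that skew, regardless of what you ask for.]

```python
import random
from collections import Counter

# Hypothetical toy "training set": caption fragments with an invented,
# deliberately skewed doctor/child distribution standing in for real
# image-caption data.
captions = (
    ["white doctor, white child"] * 80
    + ["white doctor, black child"] * 12
    + ["black doctor, black child"] * 5
    + ["black doctor, white child"] * 3
)

counts = Counter(captions)

# A naive generator that just samples in proportion to training-data
# frequency will almost never produce the rarest combination, no matter
# how clearly the prompt asks for it.
random.seed(0)
samples = Counter(
    random.choices(list(counts), weights=list(counts.values()), k=10_000)
)
for caption, n in samples.most_common():
    print(f"{caption}: {n / 10_000:.1%}")
```

Real diffusion models do condition on the prompt, but the same pressure applies: the rarer a combination is in the data, the weaker the signal pulling generations toward it.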
@davidnjoku @blterrible K, so I got curious (and woke up at 5am for some reason) and I tried “Kenyan* doctor treating white starving child” and not ONE of the children was white and half the doctors were white. That’s bullshit.
Meanwhile, I typed in
“Unicorn doctor treating a starving white child” and it messed that up too, but no black children
So, unicorns it can handle….
*I went with Kenya because I didn’t want to give it the “out” of white South Africans
@davidnjoku @blterrible Also tried this one, but it said it was “unsafe content” so at least AI/LLM recognize the dangers posed by the Elder Gods.
🤷🏻‍♀️
“I can excuse racism, but I draw the line at awakening that which sleeps” - Dall-E
@Wraithe @davidnjoku Again, it really doesn't understand that "white child" or "black child" is an atomic thing. Try this again and note the color of other things in the images returned. You're much more likely to see white cars or white walls in the background of an image that specifies a "white child". Also, beating a dead horse here, the generator is driven by the content it is fed. It doesn't have a preponderance of white doctors in the data affecting the outcome when the doctor is a unicorn.
@clive That's what I thought at first, but I'm not convinced that that fully explains it. If I asked it for a polkadot alien doctor looking after a pineapple shaped semaphod on Mars, it would do it. So why is its imagination suddenly restricted now?