Has anyone written about how textual generative AI feels strangely close to toxic masculinity in some respects? The absolute confidence in everything stated, the lack of understanding of the consequences of getting that confidence wrong for important questions, the semi-gaslighty feeling when it “corrects” itself when you call it out on something. It so often feels like talking to someone one would despise and avoid in “real life.” I’m curious if anyone did some writing on this.
#Judges are not supposed to give any impression of #bias, yet the #flag could be seen as telegraphing #Alito’s views….“We all have our biases, but the good judge fights against them,” said Charles Geyh, a #law prof at IU. “When a judge celebrates his predispositions by hoisting them on a flag…that’s deeply disturbing.”
Records show that the Alitos have owned the beach house since 2014, & he is a well-known presence in the waterfront community. Residents…recalled seeing the justice last summer….
CBC has whitewashed Israel’s crimes in Gaza. I saw it firsthand
Working for five years as a producer at the public broadcaster, I witnessed the double standards and discrimination in its coverage of Palestine—and experienced directly how CBC disciplines those who speak out
Activism in the Shadow of Cognitive Biases and Confirmation Bias
Discover how cognitive biases and confirmation bias affect social activism. This article sheds light on the challenges faced by activists in the digital age, analyzing the case of Adam Pustelnik and emphasizing the importance of accurate information interpretation in the fight against disinformation.
"A handful of powerful businessmen pushed New York City Mayor Eric Adams to use police to crack down on pro-Palestinian student protesters at #Columbia University, donating to the politician and offering to pay for private investigators to help break up the demonstrations, based on leaked WhatsApp conversations"
Watched something on #evidence in education, someone said "Think #Bias!" and the subtitles rendered this as "think by ass" and I will never not hear those three words every time I open a research paper.
Although companies have created detectors to help spot #deepfakes, studies have found that biases in the data used to train these tools can lead to certain demographic groups being unfairly targeted.
A team of researchers discovered new methods that improve both the fairness and the accuracy of these detection algorithms by teaching them about human diversity.
"Chatbots share limited information, reinforce ideologies, and, as a result, can lead to more polarized thinking when it comes to controversial issues, according to new Johns Hopkins University–led research. The study challenges perceptions that chatbots are impartial and provides insight into how using conversational search systems could widen the public divide on hot-button issues and leave people vulnerable to manipulation."
"Another point the College raises is that a large share of the offerings comes from big tech companies such as Apple, Google, and Microsoft. 'The influence of these companies can be steering, and can make schools dependent on their products, ...'"