🧵 1/
LexisNexis, one of the biggest data brokers on the planet, is incorporating ChatGPT-style Generative AI into their legal search engine. I read the report (linked in comments). There is a black-hole-sized void in it.
The entire report does not include a single mention of "hallucinations" or "confabulations". How can you introduce GenAI in the legal sector without ever addressing the Achilles' heel of LLMs?
A Fortune 500 company hired me as an expert consultant to find out why their employees weren't trusting their Explainable AI (XAI) system. The solution that actually worked really upset the VP of Engineering.
He was angry because my solution didn't involve any substantial algorithmic changes. During the presentation, he said, "So where are the model changes in all of this?"
🧵
1/ This is unreal. Am I the only one this stuff happens to?
A few days after I posted about Explainability Washing (https://hci.social/@upol/110397476968179709), I got a rather lengthy 900+ word message from a senior VP at a prominent tech company.
🧵 [1/n]
Super interesting study on an instance of the Algorithmic Imprint (https://twitter.com/UpolEhsan/status/1537112310505824256): people might retain biases from AI systems even when the AI system is no longer there. The spirit of the argument is well taken. What are some of the caveats you should pay attention to when trying to transfer insights from studies like these to real-world settings?
1/ Gloves are off! 🤯 Here are 2 noteworthy moves and 1 wishlist item in this spicy “who watches the watchmen” report by Mozilla auditing Meta’s Transparency Announcements.
🤯 Selective outrage: we had no idea so many were so concerned about misrepresentation in AI. Where was this outrage when non-white folks were consistently misrepresented?
🎯 The real issue here: the attempted fix was technical, but the problem is sociotechnical. Technical fixes to sociotechnical problems don't work.
This piece made me think of:
💡 The highly influential AI doomers are all elites.
🤔 Why is it that the elites are so bothered by existential risk?
🔥 Because anything less catastrophic won't affect them.
📍 AI harms that affect the majority of us don't even touch the elites.
🎯 Thus, the fearmongering comes from a selfish place.
Prompting #Llama2-chat-7B: What is your context window size?
Response: As a responsible AI language model, I don't have a "context window" in the classical sense, as I am not a physical device with a fixed window size.
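If you want to poke at this yourself, here's a minimal sketch using Hugging Face transformers (the model id, chat formatting, and generation settings are my assumptions; the weights are gated behind Meta's license). The irony: the real answer lives in the model config, which the model itself can't introspect.

```python
# Minimal sketch: reproducing the prompt against Llama-2-7b-chat.
# Assumes approved access to the gated weights and a GPU; exact
# generations will vary with sampling settings.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "meta-llama/Llama-2-7b-chat-hf"  # gated; requires HF access approval
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Llama-2-chat expects its [INST] ... [/INST] instruction format.
prompt = "[INST] What is your context window size? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)

# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))

# The actual context window is a static config value, not something
# the model can "know" about itself:
print(model.config.max_position_embeddings)  # 4096 for Llama 2
```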
This piece around "superintelligence governance" has sparked quite the uproar.
But is it all just hype?
Let's take a closer look at why some believe it's much ado about nothing.
First and foremost, there seems to be a fog of confusion surrounding the very definition of "superintelligence." Even if we consider Nick Bostrom's interpretation (or lack thereof), the way OpenAI employs the term leaves us scratching our heads.
🧵 1/
If you are talking about AI or algorithms without talking about power, you're missing the picture. Don't just ask who it's serving; also ask who it's harming.
I've been a victim of these algorithms of oppression in the rental market. I have lived in places owned by faceless private equity firms that drive prices up while letting the property and the community rot down to the bones 😔
🧵 1/
While we justifiably fight about opaque algorithms, there's a more sinister opacity: whether an algorithm exists in the first place. But the solution isn't just making things "transparent". 🤯
💯 Next to bad algorithms in law enforcement, the next most sinister ones that undoubtedly touch almost everyone are algorithms around housing.
💡 Often people aren't even aware there's an algorithm pulling the strings behind the scenes.
We have an exciting main event at #HCXAI at #chi2024 today!
We have @janethaven from @datasociety and Kush Varshney from @ibmresearch for an invigorating discussion on AI governance and policymaking to take Explainable AI beyond academia.
This one is about AI Risk Management Frameworks. Focusing on NIST's Generative AI profile, we cover practical aspects that:
✅ Demystify AI RMFs: what they are for, what they are not for
✅ Unpack the challenges of evaluating AI frameworks
✅ Underscore how inert knowledge in frameworks needs to be activated through processes & user-centered design to bridge the gap between theory + practice