upol, to ai
@upol@hci.social avatar

🧵 1/
LexisNexis, one of the biggest data brokers on the planet, is incorporating ChatGPT-style Generative AI into their legal search engine. I read the report (linked in comments). There is a black-hole-sized void in it.

The entire report does not include a single mention of "hallucinations" or "confabulations". How can you introduce GenAI in the legal sector without ever addressing the Achilles' heel of LLMs?

https://www.lexisnexis.com/community/pressroom/b/news/posts/lexisnexis-announces-launch-of-lexis-ai-commercial-preview-most-comprehensive-global-legal-generative-ai-platform

upol, to ai
@upol@hci.social avatar

🧵 [1/n]

A Fortune 500 company hired me as an expert consultant to help them figure out why their employees didn't trust their Explainable AI (XAI) system. The solution that worked really upset the VP of Engineering.

He was angry because my solution didn't involve any substantial algorithmic changes. During the presentation, he said, "So where are the model changes in all of this?"

Me: actually none.

Him: so what did you actually DO?

upol, (edited ) to ai
@upol@hci.social avatar

🎯 Explainable AI suffers from an epidemic. I call it Explainability Washing.

💡Think of it as window dressing: techniques, tools, or processes created to provide the illusion of explainability without actually delivering it.

Let’s use this hyped example from OpenAI. The title is sensational: "Language models can explain neurons in language models."

But is that the case? Let's dig in. 👇

https://openai.com/research/language-models-can-explain-neurons-in-language-models


1/n

upol, to academia
@upol@hci.social avatar

🧵 [1/n]
Why should you submit to the Human-centered Explainable AI workshop (#HCXAI) at #chi2024?

Come for your love of amazing XAI research; stay for our supportive community.

That's what 300+ attendees from 18+ countries have done. Here's a snippet of what they think ⤵️

Join us and submit to the workshop! Deadline: Feb 14
hcxai.jimdosite.com

Please repost and help us spread the word. 🙏

#academia #mastodon #HCI #AI #ResponsibleAI #academicmastodon

upol, (edited ) to ai
@upol@hci.social avatar

🧵
1/ This is unreal. Am I the only one this stuff happens to?

A few days after I posted about Explainability Washing (https://hci.social/@upol/110397476968179709), I got a rather lengthy 900+ word message from a senior VP at a prominent tech company.

The note starts well, but ...

upol, to ai
@upol@hci.social avatar

🧵
1/ English is the predominant language of the Internet. So what's different about ChatGPT squishing non-English languages further into the margins?

Key points from this piece, my reflections, and one key recommendation for those in Responsible AI:

https://www.wired.com/story/chatgpt-non-english-languages-ai-revolution/

upol, to ai
@upol@hci.social avatar

🧵 [1/n]
Super interesting study on an instance of the Algorithmic Imprint (https://twitter.com/UpolEhsan/status/1537112310505824256)-- people might retain biases from AI systems even when the AI system is no longer there. The spirit of the argument is well taken. What are some of the caveats you should pay attention to when trying to transfer insights from studies like these to real-world settings?

#AI #mastodon #responsibleAI #ML #bias

upol, to ChatGPT
@upol@hci.social avatar

Is this another version of the "synthetic users" argument?

I just don't understand the jump from "it performs well" to... well, can we just replace humans?

But y tho? 🫠😵‍💫

upol, to ai
@upol@hci.social avatar

1/ Gloves are off! 🤯 Here are 2 noteworthy moves and 1 wishlist item in this spicy “who watches the watchmen” report by Mozilla auditing Meta’s Transparency Announcements.

A 🧵 below....

https://foundation.mozilla.org/en/blog/this-is-not-a-system-card-scrutinising-metas-transparency-announcements/

upol, to Futurology
@upol@hci.social avatar

🧵

1/ During a meeting with a Radiation Oncologist about an ongoing XAI project, she said something unforgettable:

"As a RadOnc, I'm not interested in benchmark chasing and raising the ceiling. I'm interested in raising the floor."

Me: "what do you mean by the floor?"

upol, to ai
@upol@hci.social avatar

1/
OpenAI quietly shut down its "AI" detector.

Did shutting it down undo the harms?

No, its Algorithmic Imprint lives on.

Here's how ⤵️

https://arstechnica.com/information-technology/2023/07/openai-discontinues-its-ai-writing-detector-due-to-low-rate-of-accuracy/

upol, (edited ) to ai
@upol@hci.social avatar

🧵 1/2

🚨 Is Responsible AI too woke? Episode highlights:

🤯 Selective outrage: we had no idea so many were so concerned about misrepresentation in AI. Where was this outrage when non-white folks were consistently misrepresented?

🎯 What was the real issue here: the attempted fix was a technical one. The problems are sociotechnical. Technical fixes to sociotechnical problems don't work.

#AI #academia #mastodon #responsibleAI #DEI

https://www.youtube.com/watch?v=M4sTQxMUs0k

upol, to ai
@upol@hci.social avatar

This piece made me think of:
💡 The highly influential AI doomers are all elites.
🤔 Why is it that the elites are so bothered by existential risk?
🔥 Because anything less catastrophic won't affect them.
📍 AI harms that affect the majority of us don't even touch the elites.
🎯 Thus, the fearmongering comes from a selfish place.

So, what can we do? A thread below.

https://venturebeat.com/ai/ai-experts-challenge-doomer-narrative-including-extinction-risk-claims/

#AI #ResponsibleAI #AIethics

1/n

upol, to ai
@upol@hci.social avatar

It seems like we have entered the "Letters of AI" stage.

Letters, letters everywhere, nor any words to think.

judeswae, to llm
@judeswae@toot.thoughtworks.com avatar

Prompting #Llama2-chat-7B: What is your context window size?

Response: As a responsible AI language model, I don't have a "context window" in the classical sense, as I am not a physical device with a fixed window size.

#llm

Good to know that Llama2 has absolutely no self-awareness.

#ResponsibleAI #RiseOfTheMachines

upol, (edited ) to academia
@upol@hci.social avatar

If nothing else, scientific research will, at the very least, humble you.

Most of my life's work in one page.

A lot of work done.

Yet so much more to do.

It's a humbling feeling.

upol, to ai
@upol@hci.social avatar

This piece around "superintelligence governance" has sparked quite the uproar.

But is it all just hype?

Let's take a closer look at why some believe it's much ado about nothing.

First and foremost, there seems to be a fog of confusion surrounding the very definition of "superintelligence." Even if we consider Nick Bostrom's interpretation (or lack thereof), the way OpenAI employs the term leaves us scratching our heads.

https://openai.com/blog/governance-of-superintelligence

#ML #AI #OpenAI #responsibleai

1/n

upol, to ai
@upol@hci.social avatar

🧵 1/
If you are talking about AI or algorithms without talking about power, you're missing the picture. Don't just ask who it's serving; also ask who it's harming.

I've been a victim of these algorithms of oppression in the rental market. I have lived in places owned by faceless private equity firms that drive prices up while letting the property and the community rot down to the bones 😔

What can be done? Here's a starter pack: 👇

https://www.propublica.org/article/yieldstar-realpage-rent-doj-investigation-antitrust

upol, to ai
@upol@hci.social avatar

🧵 1/
While we justifiably fight about opaque algorithms, there's a more sinister opacity -- whether an algorithm exists in the first place. But the solution isn't just making things "transparent". 🤯

💯 Next to bad algorithms in law enforcement, the most sinister ones that undoubtedly touch almost everyone are algorithms around housing.

💡 Often people aren't even aware there's an algorithm pulling the strings behind the scenes.

https://www.propublica.org/article/how-your-shadow-credit-score-could-decide-whether-you-get-an-apartment

upol, to academia
@upol@hci.social avatar

Friendly reminder: if you gut the humanities in universities, you're asking for inhumanity in society.

Case in point: tech oligarchs creating harmful technology without an informed understanding of how it impacts society, making up their own stuff instead.

BBCRD, to ai

How can we broaden the range of voices in the AI safety debate and help foster responsible AI?

We're working with
@braid_uk and the
@AdaLovelaceInst to ensure the arts and humanities are heard.

See what experts at our launch event had to say:
https://bbc.co.uk/rd/blog/2023-10-responsible-ai-trust-policy-ethics

#AISafety #ResponsibleAI #arts #humanities #AI #ArtificialIntelligence

Video thumbnail image of a sign at the BRAID event which reads: "BRAID is dedicated to bridging the divides between academic, industry, policy and regulatory work on responsible AI."

OmaymaS, to ML
@OmaymaS@dair-community.social avatar

I need some inspiration about getting out of corporates and transitioning to non-bullshit research or non profits.

I'd like to see some examples touching the topics ( #AIethics #AIResearch #responsibleAI #ML #MLeval #AlgorithmicFairness, etc.)!

upol, to ai
@upol@hci.social avatar

We have an exciting main event at #HCXAI at #chi2024 today!

We have @janethaven from @datasociety and
Kush Varshney from @ibmresearch for an invigorating discussion on AI governance and policymaking to take Explainable AI beyond academia.

w/ @Riedl @sunniesuhyoung @nielsvanberkel

#AI #ResponsibleAI #ExplainableAI #XAI #academia

Jigsaw_You, (edited ) to tech
@Jigsaw_You@mastodon.nl avatar

We have no clear picture on how this #tech really works, but we're going to charge people a monthly fee to use it and see what happens 🙈

#ai #llm #technology #openai #SamAltman #responsibleai

https://futurism.com/sam-altman-admits-openai-understand-ai

upol, (edited ) to ai
@upol@hci.social avatar

🎉 New Episode of Irresponsible AI!

This one is about AI Risk Management Frameworks. Focusing on NIST's Generative AI profile, we cover practical aspects that:

✅ Demystify misunderstandings about AI RMFs: what they are for, what they are not for
✅ Unpack the challenges of evaluating AI frameworks
✅ Underscore how inert knowledge in frameworks needs to be activated through processes & user-centered design to bridge the gap between theory and practice

#AI #responsibleAI

https://www.youtube.com/watch?v=kiFpdwO4BG8
