upol, to ai
@upol@hci.social avatar

We have an exciting main event today!

We have @janethaven from @datasociety and
Kush Varshney from @ibmresearch for an invigorating discussion on AI governance and policymaking to take Explainable AI beyond academia.

w/ @Riedl @sunniesuhyoung @nielsvanberkel

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "For the third year in a row, MIT Sloan Management Review and Boston Consulting Group (BCG) have assembled an international panel of AI experts that includes academics and practitioners to help us gain insights into how responsible artificial intelligence (RAI) is being implemented in organizations worldwide. Last year, we published a report titled “Building Robust RAI Programs as Third-Party AI Tools Proliferate.” This year, we continue to examine organizational capacity to address AI-related risks but in a landscape that includes the first comprehensive AI law on the books — the European Union’s AI Act. To kick things off, we asked our experts and one large language model to react to the following provocation: Organizations are sufficiently expanding risk management capabilities to address AI-related risks. A clear majority (62%) of our panelists disagreed or strongly disagreed with the statement, citing the speed of technological development, the ambiguous nature of the risks, and the limits of regulation as obstacles to effective risk management. Below, we share insights from our panelists and draw on our own observations and experience working on RAI initiatives to offer recommendations on how organizations might leverage organizational risk management capabilities to address AI-related risks." https://sloanreview.mit.edu/article/ai-related-risks-test-the-limits-of-organizational-risk-management/?utm_medium=referral&utm_source=elizabeth&utm_campaign=RAIBCG2024

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "We have been here before. Other overhyped new technologies have been accompanied by parables of doom. In 2000, Bill Joy warned in a Wired cover article that “the future doesn’t need us” and that nanotechnology would inevitably lead to “knowledge-enabled mass destruction”. John Seely Brown and Paul Duguid’s criticism at the time was that “Joy can see the juggernaut clearly. What he can’t see—which is precisely what makes his vision so scary—are any controls.” Existential risks tell us more about their purveyors’ lack of faith in human institutions than about the actual hazards we face. As Divya Siddarth explained to me, a belief that “the technology is smart, people are terrible, and no one’s going to save us” will tend towards catastrophizing.

Geoffrey Hinton is hopeful that, at a time of political polarization, existential risks offer a way of building consensus. He told me, “It’s something we should be able to collaborate on because we all have the same payoff”. But it is a counsel of despair. Real policy collaboration is impossible if a technology and its problems are imagined in ways that disempower policymakers. The risk is that, if we build regulations around a future fantasy, we lose sight of where the real power lies and give up on the hard work of governing the technology in front of us."

https://www.science.org/doi/10.1126/science.adp1175

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "AI experts Camille Francois and Meredith Whittaker discuss how to break up Big Tech and build a safe and ethical AI.
In the final episode of the AI series with Maria Ressa, we meet two women on the front lines of the battle to make artificial intelligence accountable.

Camille Francois is a researcher specialising in combatting disinformation and digital harms. Nowadays she is helping lead French President Emmanuel Macron’s initiative on AI and democracy." https://www.aljazeera.com/program/studio-b-unscripted/2024/2/22/the-ai-series-ai-and-surveillance-capitalism

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Machine learning and algorithmic systems are useful tools whose potential we are only just beginning to grapple with—but we have to understand what these technologies are and what they are not. They are neither “artificial” or “intelligent”—they do not represent an alternate and spontaneously-occurring way of knowing independent of the human mind. People build these systems and train them to get a desired outcome. Even when outcomes from AI are unexpected, usually one can find their origins somewhere in the data systems they were trained on. Understanding this will go a long way toward responsibly shaping how and when AI is deployed, especially in a defense contract, and will hopefully alleviate some of our collective sci-fi panic.

This doesn’t mean that people won’t weaponize AI—and already are in the form of political disinformation or realistic impersonation. But the solution to that is not to outlaw AI entirely, nor is it handing over the keys to a nuclear arsenal to computers. We need a common sense system that respects innovation, regulates uses rather than the technology itself, and does not let panic, AI boosters, or military tacticians dictate how and when important systems are put under autonomous control." https://www.eff.org/deeplinks/2024/03/how-avoid-ai-apocalypse-one-easy-step

weareopencoop, to random
@weareopencoop@mastodon.social avatar

We updated the AI Literacy library on our website (https://buff.ly/4at9yeq). Head over to see a list of the papers, articles, and posts we are currently reading.

judeswae, to ai
@judeswae@toot.thoughtworks.com avatar

"As intelligent machines take an ever-growing role as advisors, and adherence to ethical rules crucially impacts societal welfare, studying how advice influences people's (un)ethical behaviour bears immense relevance.

We find that people follow AI-generated advice that promotes dishonesty, yet not AI-generated advice that promotes honesty. In fact, people's behavioural reactions to AI advice are indistinguishable from reactions to human advice."

https://docs.iza.org/dp16293.pdf

upol, (edited ) to ai
@upol@hci.social avatar

🧵 1/2

🚨 Is Responsible AI too woke? Episode highlights:

🤯 Selective outrage: we had no idea so many were so concerned about misrepresentation in AI. Where was this outrage when non-white folks were consistently misrepresented?

🎯 The real issue: the attempted fix was a technical one, but the problems are sociotechnical. Technical fixes to sociotechnical problems don't work.

https://www.youtube.com/watch?v=M4sTQxMUs0k

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "If we train artificial intelligence (AI) systems on biased data, they can in turn make biased judgments that affect hiring decisions, loan applications and welfare benefits — to name just a few real-world implications. With this fast-developing technology potentially causing life-changing consequences, how can we make sure that humans train AI systems on data that reflects sound ethical principles?

A multidisciplinary team of researchers at the National Institute of Standards and Technology (NIST) is suggesting that we already have a workable answer to this question: We should apply the same basic principles that scientists have used for decades to safeguard human subjects research. These three principles — summarized as “respect for persons, beneficence and justice” — are the core ideas of 1979’s watershed Belmont Report, a document that has influenced U.S. government policy on conducting research on human subjects.

The team has published its work in the February issue of IEEE’s Computer magazine, a peer-reviewed journal. While the paper is the authors’ own work and is not official NIST guidance, it dovetails with NIST’s larger effort to support the development of trustworthy and responsible AI.

“We looked at existing principles of human subjects research and explored how they could apply to AI,” said Kristen Greene, a NIST social scientist and one of the paper’s authors. “There’s no need to reinvent the wheel. We can apply an established paradigm to make sure we are being transparent with research participants, as their data may be used to train AI.”"

https://www.nist.gov/news-events/news/2024/02/nist-researchers-suggest-historical-precedent-ethical-ai-research

upol, (edited ) to academia
@upol@hci.social avatar

New irResponsible AI Episode Drop!

🎯 Deep fakes + the Taylor Swift Factor in Responsible AI
🎯 How this problem has both symptomatic and systemic roots
🎯 The enshittification of the internet through GenAI

Pls repost + help us break the filter bubble.

https://www.youtube.com/watch?v=N7wyPnwS0jk

upol, to ai
@upol@hci.social avatar

Can you help college students by recommending the best "Responsible AI" courses?

Think of introductory RAI courses that assume no prior knowledge.

They need not be offered by universities-- even a good YouTube channel will suffice.

Please repost and help spread the word.

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "‘Trustworthy artificial intelligence’ (TAI) is contested. Considering the growing power of Big Tech and the fear that AI ethics lacks sufficient institutional backing to enforce its norms on AI industry, we struggle to reconcile ethical and economic demands in AI development. To establish such a convergence in the European context, the European Commission published the Ethics Guidelines for Trustworthy AI (EGTAI), aiming to strengthen the ethical authority and find common ground among AI industry, ethicists, and legal regulators. At first glance, this attempt allows to unify different camps around AI development, but we question this unity as one that subordinates the ethical perspective to industry interests. By employing Laclau’s work on empty signifiers and critical discourse analysis, we argue that the EU’s efforts are not pointless but establish a chain of equivalences among different stakeholders by promoting ‘TAI’ as a unifying signifier, left open so that diverse stakeholders unite their aspirations in a common regulatory framework. However, through a close reading of the EGTAI, we identify a hegemony of AI industry demands over ethics. This leaves AI ethics for the uncomfortable choice of affirming industry’s hegemonic position, undermining the purpose of ethics guidelines, or contesting industry hegemony."

https://www.tandfonline.com/doi/full/10.1080/19460171.2024.2315431

judeswae, to llm
@judeswae@toot.thoughtworks.com avatar

Prompting Llama2-chat-7B: What is your context window size?

Response: As a responsible AI language model, I don't have a "context window" in the classical sense, as I am not a physical device with a fixed window size.

Good to know that Llama2 has absolutely no self-awareness.
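For anyone who wants to reproduce this kind of probe locally, here is a minimal sketch, assuming the Hugging Face transformers library and the meta-llama/Llama-2-7b-chat-hf checkpoint (the post does not say which interface was actually used):

```python
# Minimal sketch of prompting a Llama 2 chat model locally.
# Assumptions: transformers + accelerate installed, access granted to the
# gated meta-llama/Llama-2-7b-chat-hf checkpoint, and a GPU or enough RAM.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Llama 2 chat models expect the [INST] ... [/INST] instruction format.
prompt = "[INST] What is your context window size? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```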

upol, to Podcast
@upol@hci.social avatar

🎉 New Episode of Irresponsible AI has dropped! This one is spicy.

Is Responsible AI all hype?
How do we identify the grifters?
How can you break into RAI consulting?
We answer these Qs & more.

Pls repost + help us reach beyond our echo chambers!

https://youtu.be/FHAT2KEBRQI?si=VCdLC4Pggle8utWl

upol, to ai
@upol@hci.social avatar

🎉 A new episode of irResponsible AI is out!

🙏 Please help us spread the word to annoy the wrong people.

This episode covers:

⚡️ Algorithmic Imprints: examples of harms from zombie algorithms

⚡️ The FTC vs. Rite Aid Scandal: biased facial recognition

⚡️ NIST's Trustworthy AI Institute + AI regulation

⚡️ Why AI is a tricky design material

⚡️ How AI has a "developer savior" complex and how to solve it

https://www.youtube.com/watch?v=d9gDpaXGihE

upol, to academia
@upol@hci.social avatar

🧵 [1/n]
Why should you submit to the Human-centered Explainable AI workshop (#HCXAI) at #chi2024?

Come for your love for amazing XAI research; stay for our supportive community.

That's what 300+ attendees from 18+ countries have done. Here's a snippet of what they think ⤵️

Join us and submit to the workshop! Deadline: Feb 14
hcxai.jimdosite.com

Please repost and help us spread the word. 🙏

#academia #mastodon #HCI #AI #ResponsibleAI #academicmastodon

OmaymaS, to ML
@OmaymaS@dair-community.social avatar

I need some inspiration about leaving the corporate world and transitioning to non-bullshit research or nonprofits.

I'd like to see some examples touching on these topics!

vdignum, to humanrights
@vdignum@mastodon.social avatar

After 8 weeks of hard work, I'm very happy to present our interim report: https://www.un.org/ai-advisory-body

The report calls for the use and development of AI to be grounded on human rights, international law and the Sustainable Development Goals.

A lot can be said, extended, modified, and we look forward to your comments.

Read the report: https://www.un.org/sites/un2.un.org/files/ai_advisory_body_interim_report.pdf

#SDGs

BBCRD, to ai
@BBCRD@social.bbc avatar

How can we broaden the range of voices in the AI safety debate and help foster responsible AI?

We're working with
@braid_uk and the
@AdaLovelaceInst to ensure the arts and humanities are heard.

See what experts at our launch event had to say:
https://bbc.co.uk/rd/blog/2023-10-responsible-ai-trust-policy-ethics

Video thumbnail image of a sign at the BRAID event which reads: "BRAID is dedicated to bridging the divides between academic, industry, policy and regulatory work on responsible AI."

asusarla, to random

New piece for @TheConversationUS on the Biden Administration's sweeping new executive order on "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence"

https://theconversation.com/biden-administration-executive-order-tackles-ai-risks-but-lack-of-privacy-laws-limits-reach-216694

upol, to ai
@upol@hci.social avatar

Algorithmic harms have an acute phase and then a chronic phase.

Right now, we are only tackling the acute ones.

We are yet to figure out how to tackle the chronic ones-- the ones that persist long after the algo is destroyed.

upol, (edited ) to ai
@upol@hci.social avatar

🤔 What is the central ethos of Human-centered Explainable AI (HCXAI) in one slide in plain English?

⚡️ The best place to start is from what I call the Algorithm-centered myth in XAI: if you can just open the black-box of AI, everything will be fine.

🎁 Human-centered XAI acts like myth busters and responds:

🎯 Not everything that matters lies inside the black-box of AI
💡 Critical answers can lie outside it.
🔥 Why? Because that’s where the humans are

upol, to ai
@upol@hci.social avatar

🧵 [1/n]
Super interesting study on an instance of the Algorithmic Imprint (https://twitter.com/UpolEhsan/status/1537112310505824256)-- people might retain biases from AI systems even when the AI system is no longer there. The spirit of the argument is well taken. What are some of the caveats you should pay attention to when trying to transfer insights from studies like these to real-world settings?

mtait, to ai

A recent MIT Sloan Management Review and Boston Consulting Group study reveals that more than half of all corporate AI failures are linked to third-party tools.

Moreover, the fast pace of AI advancement is making it harder to use AI responsibly, reinforcing the need to reevaluate Responsible AI (RAI) frameworks, which were not designed to deal with the astonishing number of risks that GenAI agents can generate.

https://sloanreview.mit.edu/projects/building-robust-rai-programs-as-third-party-ai-tools-proliferate/

upol, to ai
@upol@hci.social avatar

🧵 [1/n]

A Fortune 500 company hired me as an expert consultant to help them find out why their employees were not trusting their Explainable AI (XAI) system. My solution worked, but it really upset the VP of Engineering.

He was angry because my solution didn't involve any substantial algorithmic changes. During the presentation, he said, "So where are the model changes in all of this?"

Me: actually none.

Him: so what did you actually DO?

#AI #mastodon #XAI #responsibleai
