smeg, to OpenAI
@smeg@assortedflotsam.com

OpenAI putting ‘shiny products’ above safety, says departing researcher | Artificial intelligence (AI) | The Guardian
https://www.theguardian.com/technology/article/2024/may/18/openai-putting-shiny-products-above-safety-says-departing-researcher #openai #genai #ai #aisafety

Sevoris, to OpenAI

So OpenAI is worried about "unaligned" AI.

Then they copy a woman’s voice against her explicit wishes, because the CEO loves her performance in a movie.

Yeah, alignment is going great. The abusive ethics, the sexism and disrespect are inside the fucking house. These people couldn’t train a responsible being even if they ever made an intelligent one.

This is a litmus test for the entire academic AI safety bubble. And I can guess how many will respond to this, as well. They won’t.

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org

#AI #GenerativeAI #OpenAI #AISafety #AIEthics: "For months, OpenAI has been losing employees who care deeply about making sure AI is safe. Now, the company is positively hemorrhaging them.

Ilya Sutskever and Jan Leike announced their departures from OpenAI, the maker of ChatGPT, on Tuesday. They were the leaders of the company’s superalignment team — the team tasked with ensuring that AI stays aligned with the goals of its makers, rather than acting unpredictably and harming humanity.

They’re not the only ones who’ve left. Since last November — when OpenAI’s board tried to fire CEO Sam Altman only to see him quickly claw his way back to power — at least five more of the company’s most safety-conscious employees have either quit or been pushed out."

https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence

ai6yr, to ai
@ai6yr@m.ai6yr.org

Axios: OpenAI CEO Sam Altman is one of a select group of AI leaders handpicked by Homeland Security Secretary Alejandro Mayorkas to join a new federal Artificial Intelligence Safety and Security Board. https://www.axios.com/2024/04/26/altman-mayorkas-dhs-ai-safety-board?utm_source=mastodon&utm_medium=social&utm_campaign=editorial

strypey, to ai
@strypey@mastodon.nzoss.nz

"AI risks are exploits on pools of technological power. Guarding those pools prevents disasters from exploitation by hostile people or institutions as well. That makes the effort well-spent even if Scary AI never happens. This may be more appealing to publics, or governments, if they are skeptical of AI doom."

https://betterwithout.ai/pragmatic-AI-safety

I've posted a quote along these lines before, but I think it's a key point, worth reiterating.

davidaugust, to ai
@davidaugust@mastodon.online

“…a deep truth about AI: that the story of AI being managed by a ‘human in the loop’ is a fantasy, because humans are neurologically incapable of maintaining vigilance in watching for rare occurrences.”

https://pluralistic.net/2023/08/23/automation-blindness/#humans-in-the-loop

chrisoffner3d, to ai

“Me flaunting my insane wealth is good for AI safety, bro.” – Sam Altman


chrisoffner3d, to llm

The goal of the LVE project is to create a hub for the community, to document, track and discuss language model vulnerabilities and exposures (LVEs).

https://lve.pages.dev/

williamgunn, to ai
@williamgunn@mastodon.social

I'm a PhD biologist and I read @OpenAI's threat preparedness assessment plan for CBRN threats. It appears to be total nonsense designed without any input from a scientist. Here's why:

gmusser, to Futurology

When people fret that A.I.s will achieve superhuman general intelligence and take over the planet, they neglect the physical limits on these systems. This essay by Dan Roberts is a useful reality check. A.I. models are already resource-intensive and will probably top out at GPT-7. Roberts is one of the physicists I feature in my new book about physics, A.I., and neuroscience. #AIrisk #AIsafety #Singularity @danintheory https://www.sequoiacap.com/article/black-holes-perspective/

chrisoffner3d, to ai

Using ChatGPT’s knowledge cutoff date against it.

AI safety standards are such a joke, it’s like we’re back in the 90s of software security.

(via https://x.com/venturetwins/status/1710321733184667985)

chrisoffner3d, to ai

> The fixation on speculative harms is “almost like a caricature of the reality that we’re experiencing,” said Deborah Raji, an AI researcher at the University of California, Berkeley. She worries that the focus on existential dangers will steer lawmakers away from risks that AI systems already pose, including their tendency to inject bias, spread misinformation, threaten copyright protections and weaken personal privacy.

https://www.politico.com/news/2023/10/13/open-philanthropy-funding-ai-policy-00121362

BBCRD, to ai

How can we broaden the range of voices in the AI safety debate and help foster responsible AI?

We're working with @braid_uk and the @AdaLovelaceInst to ensure the arts and humanities are heard.

See what experts at our launch event had to say:
https://bbc.co.uk/rd/blog/2023-10-responsible-ai-trust-policy-ethics

#AISafety #ResponsibleAI #arts #humanities #AI #ArtificialIntelligence

Video thumbnail image of a sign at the BRAID event which reads: "BRAID is dedicated to bridging the divides between academic, industry, policy and regulatory work on responsible AI."

asusarla, to random

New piece for @TheConversationUS on the Biden Administration's sweeping new executive order on "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence"

#aisafety #executiveorder #responsibleai #foundationmodels

https://theconversation.com/biden-administration-executive-order-tackles-ai-risks-but-lack-of-privacy-laws-limits-reach-216694

williamgunn, to ai
@williamgunn@mastodon.social

What would a superintelligent AI think about creating an intelligence greater than it? #ai #artificialintelligence #airisk #aisafety #scifi

williamgunn, to ai
@williamgunn@mastodon.social

Normally I would block out the name if I'm sharing something to comment negatively on it, but if you're going to unironically declare yourself a terrorist...
#ai #artificialintelligence #aisafety #airisk #terrorism #biosafety #biosecurity

jbzfn, to ai
@jbzfn@mastodon.social

:welp: From @TheConversationUS:

「 If you’re asking your chatbot for political information, are the results skewed by the politics of the corporation that owns the chatbot? Or the candidate who paid it the most money? Or even the views of the demographic of the people whose data was used in training the model? 」

#AI #AISafety #AIEthics
https://theconversation.com/can-you-trust-ai-heres-why-you-shouldnt-209283

mnl, to LLMs

As long as companies like OpenAI, Anthropic, Google and co don't put out high-quality training material explaining to users what LLMs are, how they function, how they can be abused and how to deal with that, it's really hard to take their getting all worked up about "AI safety" seriously.

A decent, level-headed online course with five short 5-minute modules would solve so many immediate issues. Every SaaS company does this.

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org

#AI #AGI #AIEthics #AISafety #Racism: "The problem with the ‘schism’ framing is that to talk about a ‘schism’ is to talk about something that once was a whole and now is broken apart — authors that use this metaphor thus imply that such a whole once existed. But this is emphatically not a story of a community that once shared concerns and now is broken into disagreeing camps. Rather, there are two separate threads — only one of which can properly be called a body of scholarship — that are being held up as in conversation or in competition with each other. I think this forced pairing comes in part from the media trying to fit the recent AI doomer PR pushes into a broader narrative and in part from the fact that there is competition for a limited resource: policymaker attention."
https://medium.com/@emilymenonbender/talking-about-a-schism-is-ahistorical-3c454a77220f

FeralRobots, to ai
@FeralRobots@mastodon.social

That story about AI hiring a human to solve a CAPTCHA for it? 100% fearmongering.

Also, the outlook for actual AI safety might be worse than we feared, because it's not clear the people doing that work know how to use the specification tools that have been developed for the task.

https://aiguide.substack.com/p/did-gpt-4-hire-and-then-lie-to-a

@ct_bergstrom / https://fediscience.org/

KathyReid, to ai
@KathyReid@aus.social

A group of prominent #AI and #ML scientists signed a very simple statement calling for the possibility of global catastrophe caused by AI to be given more prominence.

https://www.safe.ai/statement-on-ai-risk

This is part of a broader #AISafety or #AIRisk movement. I don't disagree with everything this movement has to say; there are real and tangible consequences to unfettered development of AI systems.

But the focus of this work is on possible futures. Right now, there are people who experience discrimination, poorer outcomes, impeded life chances, and real, material harms because of the technologies we already have in place.

And I wonder if this focus on possible futures is because the people warning about them don't feel the real and material harms #AI already causes? Because they're predominantly male-identifying. Or white. Or socio-economically advantaged. Or well educated. Or articulate. Or powerful. Or intersectionally, many of these qualities.

It's hard to worry about a possible future when you're living a life of a thousand machine learning-triggered paper cuts in the one that exists already.

tabea, to random

This is the launch of the very first open-source syllabus on trust & safety at @StanfordCyber: https://www.youtube.com/watch?v=_jMcv_0MeF4.

It's the output of 8 months of work by the Trust & Safety Teaching Consortium - a loosely-organized coalition of academic, industry and non-profit experts - addressing topics from trust & safety regulation to metrics & measurement in trust & safety, policy issues such as terrorism, CSAM and platform abuse, and the role of identity.

Watch 60 minutes of 14 professionals introducing 13 modules, led by @shelbygrossman and @alex. It was genuinely inspiring to see what each and every group has created.

You can find our teaching materials on Stanford IO's GitHub: https://github.com/stanfordio/TeachingTrustSafety

Looking forward to hearing your thoughts and comments, particularly on the module "Authentication, Identity, and Platform Manipulation".

itnewsbot, to internet

Fake Pentagon “explosion” photo sows confusion on Twitter - A fake AI-generated image of an "explosion" near the Pentagon... - https://arstechnica.com/?p=1941475

tabea, to random

Over the past 8 months the Trust and Safety Teaching Consortium @StanfordCyber - a loosely-organized coalition of academic, industry and non-profit experts - has been creating teaching materials with one goal: Help make the internet a safer place for everyone.

The open-source syllabus is available for everyone who prepares the next generation of trust & safety professionals, engineers and PMs.

Thanks to @shelbygrossman and @alex for their leadership in establishing the consortium and all the work it takes to get things done and to create real and valuable output.

We are launching the teaching materials with a webinar on Wednesday, May 24 at 9am PST.

Register here: https://stanford.zoom.us/webinar/register/WN_hQsG_jeVTTqFYiis8KujyQ#/registration

#TrustAndSafety #safety #onlineabuse #privacysecurity #stanford #scaledabuse #AISafety
