Then they copy a woman's voice against her explicit wishes, because the CEO loves her performance in a movie.
Yeah, #AISafety is going great. The abusive ethics, the sexism and disrespect are inside the fucking house. These people couldn't train a responsible being even if they managed to make an intelligent one.
This is a litmus test for the entire academic AI safety bubble. And I can guess how many will respond to this, too: they won't.
#AI #GenerativeAI #OpenAI #AISafety #AIEthics: "For months, OpenAI has been losing employees who care deeply about making sure AI is safe. Now, the company is positively hemorrhaging them.
Ilya Sutskever and Jan Leike announced their departures from OpenAI, the maker of ChatGPT, on Tuesday. They were the leaders of the company’s superalignment team — the team tasked with ensuring that AI stays aligned with the goals of its makers, rather than acting unpredictably and harming humanity.
They’re not the only ones who’ve left. Since last November — when OpenAI’s board tried to fire CEO Sam Altman only to see him quickly claw his way back to power — at least five more of the company’s most safety-conscious employees have either quit or been pushed out."
"AI risks are exploits on pools of technological power. Guarding those pools prevents disasters from exploitation by hostile people or institutions as well. That makes the effort well-spent even if Scary AI never happens. This may be more appealing to publics, or governments, if they are skeptical of AI doom."
“…a deep truth about AI: that the story of AI being managed by a ‘human in the loop’ is a fantasy, because humans are neurologically incapable of maintaining vigilance in watching for rare occurrences.”
I'm a PhD biologist and I read @OpenAI's threat preparedness assessment plan for CBRN threats. It appears to be total nonsense designed without any input from a scientist. Here's why: #ai #artificialintelligence #airisk #aisafety
When people fret that A.I.s will achieve superhuman general intelligence and take over the planet, they neglect the physical limits on these systems. This essay by Dan Roberts is a useful reality check. A.I. models are already resource-intensive and will probably top out at GPT-7. Roberts is one of the physicists I feature in my new book about physics, A.I., and neuroscience. #AIrisk #AIsafety #Singularity @danintheory https://www.sequoiacap.com/article/black-holes-perspective/
> The fixation on speculative harms is “almost like a caricature of the reality that we’re experiencing,” said Deborah Raji, an AI researcher at the University of California, Berkeley. She worries that the focus on existential dangers will steer lawmakers away from risks that AI systems already pose, including their tendency to inject bias, spread misinformation, threaten copyright protections and weaken personal privacy.
New piece for @TheConversationUS on the Biden Administration's sweeping new executive order on "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence"
「 If you’re asking your chatbot for political information, are the results skewed by the politics of the corporation that owns the chatbot? Or the candidate who paid it the most money? Or even the views of the demographic of the people whose data was used in training the model? 」
As long as companies like OpenAI, Anthropic, Google and co don't put out high-quality training material explaining to users what LLMs are, how they function, how they can be abused and how to deal with that, it's really hard to take their getting all worked up about "AI safety" seriously.
A decent, level-headed online course with five little 5-minute modules would solve so many immediate issues. Every SaaS company does this.
#AI #AGI #AIEthics #AISafety #Racism: "The problem with the ‘schism’ framing is that to talk about a ‘schism’ is to talk about something that once was a whole and now is broken apart — authors that use this metaphor thus imply that such a whole once existed. But this is emphatically not a story of a community that once shared concerns and now is broken into disagreeing camps. Rather, there are two separate threads — only one of which can properly be called a body of scholarship — that are being held up as in conversation or in competition with each other. I think this forced pairing comes in part from the media trying to fit the recent AI doomer PR pushes into a broader narrative and in part from the fact that there is competition for a limited resource: policymaker attention." https://medium.com/@emilymenonbender/talking-about-a-schism-is-ahistorical-3c454a77220f
That story about AI hiring a human to solve a CAPTCHA for it? 100% #bullshit #AIHype fearmongering.
Also the outlook for actual #AISafety might be worse than we feared because it's not clear the people doing #AI know how to use the specification tools that have been developed for the task.
A group of prominent #AI and #ML scientists signed a very simple statement urging that the possibility of global catastrophe caused by AI be given more prominence.
This is part of a broader movement of #AISafety or #AIRisk. I don't disagree with everything this movement has to say; there are real and tangible consequences to unfettered development of AI systems.
But the focus of this work is on possible futures. Right now, there are people who experience discrimination, poorer outcomes, impeded life chances, and real, material harms because of the technologies already in place.
And I wonder if this focus on possible futures is because the people warning about them don't feel the real and material harms #AI already causes? Because they're predominantly male-identifying. Or white. Or socio-economically advantaged. Or well educated. Or articulate. Or powerful. Or intersectionally, many of these qualities.
It's hard to worry about a possible future when you're living a life of a thousand machine learning-triggered paper cuts in the one that exists already.
It's the output of 8 months of work by the Trust & Safety Teaching Consortium - a loosely-organized coalition of academic, industry and non-profit experts - covering topics ranging from trust & safety regulation to metrics & measurement in trust & safety, policy issues such as terrorism, CSAM and platform abuse, and the role of identity.
Watch 60 minutes of 14 professionals introducing 13 modules, led by @shelbygrossman and @alex. It was genuinely inspiring to see what each and every group has created.
Over the past 8 months the Trust and Safety Teaching Consortium @StanfordCyber - a loosely-organized coalition of academic, industry and non-profit experts - has been creating teaching materials with one goal: Help make the internet a safer place for everyone.
The open-source syllabus is available for everyone who prepares the next generation of trust & safety professionals, engineers and PMs.
Thanks to @shelbygrossman and @alex for their leadership in establishing the consortium and all the work it takes to get things done and to create real and valuable output.
We are launching the teaching materials with a webinar on Wednesday, May 24 at 9am PST.