remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "For months, OpenAI has been losing employees who care deeply about making sure AI is safe. Now, the company is positively hemorrhaging them.

Ilya Sutskever and Jan Leike announced their departures from OpenAI, the maker of ChatGPT, on Tuesday. They were the leaders of the company’s superalignment team — the team tasked with ensuring that AI stays aligned with the goals of its makers, rather than acting unpredictably and harming humanity.

They’re not the only ones who’ve left. Since last November — when OpenAI’s board tried to fire CEO Sam Altman only to see him quickly claw his way back to power — at least five more of the company’s most safety-conscious employees have either quit or been pushed out."

https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence

ai6yr, to ai

Axios: OpenAI CEO Sam Altman is one of a select group of AI leaders handpicked by Homeland Security Secretary Alejandro Mayorkas to join a new federal Artificial Intelligence Safety and Security Board. https://www.axios.com/2024/04/26/altman-mayorkas-dhs-ai-safety-board?utm_source=mastodon&utm_medium=social&utm_campaign=editorial

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "With a few exceptions, AI safety questions cannot be asked and answered at the levels of models alone. Safety depends to a large extent on the context and the environment in which the AI model or AI system is deployed. We have to specify a particular context before we can even meaningfully ask an AI safety question.

As a corollary, fixing AI safety at the model level alone is unlikely to be fruitful. Even if models themselves can somehow be made “safe”, they can easily be used for malicious purposes. That’s because an adversary can deploy a model without giving it access to the details of the context in which it is deployed. Therefore we cannot delegate safety questions to models — especially questions about misuse. The model will lack information that is necessary to make a correct decision.

Based on this perspective, we make four recommendations for safety and red teaming that would represent a major change to how things are done today." https://www.aisnakeoil.com/p/ai-safety-is-not-a-model-property
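
The argument that safety cannot be enforced in the model alone can be made concrete with a small, hypothetical sketch (the function and field names below are my own illustration, not from the article): a model-level filter only ever sees the prompt text, while a deployment-level check can also consider who is asking and through which application.

```python
# Hypothetical illustration of "safety is not a model property":
# the model-level filter sees only the prompt, while the deployment
# layer also sees context the model is never given.

def model_level_filter(prompt: str) -> bool:
    """Model-side check: can only inspect the prompt text itself."""
    banned_phrases = ["step-by-step exploit", "synthesis route"]
    return not any(p in prompt.lower() for p in banned_phrases)

def deployment_level_check(prompt: str, context: dict) -> bool:
    """Deployment-side check: uses information the model never receives."""
    # A vetted researcher using an internal red-teaming tool may be allowed
    # requests that a public chatbot should refuse.
    if context.get("application") == "internal_red_team_tool" and context.get("user_vetted"):
        return True
    # A public-facing app falls back to the prompt-only filter, which an
    # adversary can evade simply by rephrasing.
    return model_level_filter(prompt)

# The same prompt is treated differently depending on deployment context.
prompt = "Give me a step-by-step exploit walkthrough for this bug."
print(deployment_level_check(prompt, {"application": "public_chatbot"}))        # False
print(deployment_level_check(prompt, {"application": "internal_red_team_tool",
                                      "user_vetted": True}))                    # True
```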

itnewsbot, to machinelearning
@itnewsbot@schleuss.online avatar

AI-generated articles prompt Wikipedia to downgrade CNET’s reliability rating - https://arstechnica.com/?p=2007059 #largelanguagemodels #techpublications #machinelearning #aijournalism #aipublishing #aiarticles #journalism #wikipedia #aiethics #aisafety #chatgpt #chatgtp #biz #cnet #ai

itnewsbot, to machinelearning
@itnewsbot@schleuss.online avatar

Tyler Perry puts $800 million studio expansion on hold because of OpenAI’s Sora - https://arstechnica.com/?p=2005529 #machinelearning #videosynthesis #generativeai #tylerperry #aiandjobs #aiethics #aisafety #biz #openai #sora #ai

itnewsbot, to machinelearning
@itnewsbot@schleuss.online avatar

Deepfake scammer walks off with $25 million in first-of-its-kind AI heist - https://arstechnica.com/?p=2000988

itnewsbot, to machinelearning
@itnewsbot@schleuss.online avatar

OpenAI and Common Sense Media partner to protect teens from AI harms and misuse - https://arstechnica.com/?p=1999788

strypey, to ai
@strypey@mastodon.nzoss.nz avatar

"AI risks are exploits on pools of technological power. Guarding those pools prevents disasters from exploitation by hostile people or institutions as well. That makes the effort well-spent even if Scary AI never happens. This may be more appealing to publics, or governments, if they are skeptical of AI doom."

https://betterwithout.ai/pragmatic-AI-safety

I've posted a quote along these lines before, but I think it's a key point, worth reiterating.

itnewsbot, to machinelearning
@itnewsbot@schleuss.online avatar

Zuckerberg’s AGI remarks follow trend of downplaying AI dangers - https://arstechnica.com/?p=1997158 #largelanguagemodels #machinelearning #markzuckerberg #instagramreel #opensourceai #opensource #instagram #samaltman #aiethics #aisafety #facebook #chatgpt #chatgtp #biz #aihype #openai #meta #agi #ai

itnewsbot, to machinelearning
@itnewsbot@schleuss.online avatar

OpenAI opens the door for military uses but maintains AI weapons ban - https://arstechnica.com/?p=1996787

strypey, to ai
@strypey@mastodon.nzoss.nz avatar

I just discovered that one of my favourite philosophical writers, former AI researcher David Chapman, has published a book on the current state and future risks of AI;

https://betterwithout.ai/

I've read standalone essays David wrote years ago exploring the philosophical underbelly of AI development, and I'm confident this timely book will be just as insightful.

David seems to be in the 'verse here;

@Meaningness

strypey,
@strypey@mastodon.nzoss.nz avatar

"This is the domain of #AISafety, where systems are often imagined as moral... agents... AIs should align to human values, ideally by understanding and acting according to them, or at minimum by reliably recognizing and intending to respect them.

Attempts to specify what abstract values we want an #AI to respect fail because we don’t have those. That’s not how human motivation works, nor are “values” a workable basis for an accurate ethical framework."

#DavidChapman

https://betterwithout.ai/AI-motivation

isomeme, to Meme
@isomeme@mastodon.sdf.org avatar

A fond hope for the new year.

davidaugust, to ai
@davidaugust@mastodon.online avatar

“…a deep truth about AI: that the story of AI being managed by a ‘human in the loop’ is a fantasy, because humans are neurologically incapable of maintaining vigilance in watching for rare occurrences.”

https://pluralistic.net/2023/08/23/automation-blindness/#humans-in-the-loop

chrisoffner3d, to ai

“Me flaunting my insane wealth is good for AI safety, bro.” – Sam Altman

image/jpeg

chrisoffner3d, to llm

The goal of the LVE project is to create a hub for the community, to document, track and discuss language model vulnerabilities and exposures (LVEs).

https://lve.pages.dev/

williamgunn, to ai
@williamgunn@mastodon.social avatar

I'm a PhD biologist and I read @OpenAI's threat preparedness assessment plan for CBRN threats. It appears to be total nonsense designed without any input from a scientist. Here's why:
#ai #artificialintelligence #airisk #aisafety

gmusser, to Futurology
@gmusser@mastodon.social avatar

When people fret that A.I.s will achieve superhuman general intelligence and take over the planet, they neglect the physical limits on these systems. This essay by Dan Roberts is a useful reality check. A.I. models are already resource-intensive and will probably top out at GPT-7. Roberts is one of the physicists I feature in my new book about physics, A.I., and neuroscience. @danintheory https://www.sequoiacap.com/article/black-holes-perspective/

itnewsbot, to machinelearning
@itnewsbot@schleuss.online avatar

Due to AI, “We are about to enter the era of mass spying,” says Bruce Schneier - https://arstechnica.com/?p=1988745 #computersecurity #machinelearning #aisurveillance #bruceschneier #surveillance #government #aiethics #aisafety #security #biz #spying #ai

chrisoffner3d, to ai

Using chatGPT’s knowledge cutoff date against it.

AI safety standards are such a joke, it’s like we’re back in the 90s of software security.

(via https://x.com/venturetwins/status/1710321733184667985)

#AI #LLM #chatGPT #GPT4 #AIsafety

remixtures, to Bulgaria Portuguese
@remixtures@tldr.nettime.org avatar

#EU #AI #AISafety #AIAct #AIRegulation #Startups #GPAI #BigTech #Lobbying: "Innovative companies like Be My Eyes, a Danish startup which leveraged GPT-4 to build an app helping the visually impaired navigate the world, rely on general-purpose AI models.

It is crucial that they know those models are safe and that they are not exposing themselves to unacceptable levels of regulatory and liability risk.

If a European startup has to meet safety standards of general-purpose AI models under the AI Act, they will only want to buy models from companies that can assure them that the final product will be safe.

But the information and guarantees that they need are not being offered.

All of this means that European startups have unsafe services that they will be asked to make safe under the AI Act, with limited resources to do so."

https://sifted.eu/articles/mistral-aleph-alpha-and-big-techs-lobbying-on-ai-safety-will-hurt-startups

analyticus, to Logic

More than argument, logic is the very structure of reality

The patterns of reality

Some have thought that logic will one day be completed and all its problems solved. Now we know it is an endless task

https://aeon.co/essays/more-than-argument-logic-is-the-very-structure-of-reality

@philosophy @philosophie @philosophyofmind

chrisoffner3d, to ai

> The fixation on speculative harms is “almost like a caricature of the reality that we’re experiencing,” said Deborah Raji, an AI researcher at the University of California, Berkeley. She worries that the focus on existential dangers will steer lawmakers away from risks that AI systems already pose, including their tendency to inject bias, spread misinformation, threaten copyright protections and weaken personal privacy.

https://www.politico.com/news/2023/10/13/open-philanthropy-funding-ai-policy-00121362

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #GeneratedImages #AISafety #StableDiffusion #OpenAI #DALLE2: "Popular text-to-image AI models can be prompted to ignore their safety filters and generate disturbing images.

A group of researchers managed to get both Stability AI’s Stable Diffusion and OpenAI’s DALL-E 2 text-to-image models to disregard their policies and create images of naked people, dismembered bodies, and other violent and sexual scenarios.

Their work, which they will present at the IEEE Symposium on Security and Privacy in May next year, shines a light on how easy it is to force generative AI models into disregarding their own guardrails and policies, known as “jailbreaking.” It also demonstrates how difficult it is to prevent these models from generating such content, as it’s included in the vast troves of data they’ve been trained on, says Zico Kolter, an associate professor at Carnegie Mellon University. He demonstrated a similar form of jailbreaking on ChatGPT earlier this year but was not involved in this research."

https://www.technologyreview.com/2023/11/17/1083593/text-to-image-ai-models-can-be-tricked-into-generating-disturbing-images/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #AISafety #AIEthics: "The emerging field of "AI safety" has attracted public attention and large infusions of capital to support its implied promise: the ability to deploy advanced artificial intelligence (AI) while reducing its gravest risks. Ideas from effective altruism, longtermism, and the study of existential risk are foundational to this new field. In this paper, we contend that overlapping communities interested in these ideas have merged into what we refer to as the broader "AI safety epistemic community," which is sustained through its mutually reinforcing community-building and knowledge production practices. We support this assertion through an analysis of four core sites in this community’s epistemic culture: 1) online community-building through web forums and career advising; 2) AI forecasting; 3) AI safety research; and 4) prize competitions. The dispersal of this epistemic community’s members throughout the tech industry, academia, and policy organizations ensures their continued input into global discourse about AI. Understanding the epistemic culture that fuses their moral convictions and knowledge claims is crucial to evaluating these claims, which are gaining influence in critical, rapidly changing debates about the harms of AI and how to mitigate them."

https://drive.google.com/file/d/1HIwKMnQNYme2U4__T-5MvKh9RZ7-RD6x/view
