Nonilex, to tech
@Nonilex@masto.ai avatar

#Tech giants have been partnering w/ up-&-coming #AI start-ups, like #Microsoft backing #OpenAI, but Amazon has not been as active as rivals until now.

#Amazon said on Mon that it would invest up to $4B in the #ArtificialIntelligence #StartUp #Anthropic, as the world’s biggest #technology companies race to benefit from AI breakthroughs that could reshape parts of their businesses — & the #economy as a whole.

https://www.nytimes.com/2023/09/25/business/amazon-anthropic-ai-deal.html?smid=nytcore-ios-share&referringSource=articleShare

TechDesk, to ai
@TechDesk@flipboard.social avatar

Can AI models learn to deceive us? Yes they can, according to a study by AI startup Anthropic. TechCrunch has the details:

https://flip.it/ImXgLk

#AI #Anthropic #TechNews

ppatel, to random
@ppatel@mstdn.social avatar

Considering this set of principles by which #Anthropic tries to train its #AI, I found that it does not always meet those principles.

Anthropic, an AI startup founded by former OpenAI staff and that raised $1.3B, including $300M from #Google, details its “constitutional AI” for safer #chatbots.

https://www.theverge.com/2023/5/9/23716746/ai-startup-anthropic-constitutional-ai-safety

#safety #MachineLearning #GPT #GenerativeAI

theaiml, to opensource

After months of work and $10 million, Databricks has unveiled DBRX - the world's most potent publicly available open-source large language model.

DBRX outperforms open models like Meta's Llama 2 across benchmarks, even nearing the abilities of OpenAI's closed GPT-4. Novel architectural tweaks like a "mixture of experts" boosted DBRX's training efficiency by 30-50%.

#databricks #opensource #openai #grok #gemini #llm #model #meta #llama #anthropic #claude #chatgpt #top #ai #training #public

upright, to random
@upright@sfba.social avatar

Why would #anthropic require a phone number to use its app? NOPE.

itnewsbot, to machinelearning

The New York Times prohibits AI vendors from devouring its content - Enlarge (credit: Benj Edwards / Getty Images)

In early August,... - https://arstechnica.com/?p=1960621 #largelanguagemodels #machinelearning #thenewyorktimes #googlebard #journalism #anthropic #aiethics #chatgtp #claude2 #biz#llama2 #openai #palm2 #tech #meta #ai

itnewsbot, to medical

Universal Music sues AI start-up Anthropic for scraping song lyrics - Enlarge / Universal Music artist Billie Eilish performing at Glastonbur... - https://arstechnica.com/?p=1977169 #universalmusic #syndication #anthropic #copyright #policy #ai

itnewsbot, to machinelearning

Elon Musk’s new AI model doesn’t shy from questions about cocaine and orgies - Enlarge (credit: Getty Images | Benj Edwards)

On Saturday, Elo... - https://arstechnica.com/?p=1981276 #largelanguagemodels #largelanguagemodel #machinelearning #culturewars #anthropic #elonmusk #chatgpt #chatgtp #claude2 #twitter #biz#llama2 #grok #meta #woke #x.ai #ai

Amr1ta, to ai
@Amr1ta@mastodon.social avatar

Tried Claude.ai from #Anthropic -
Its UX has an ivory background with black and violet font. Not sure if it’s a conscious choice of showing privilege based on trust, but it works.
The chat responses have an embedded option to ‘copy’ and give feedback. It’s helpful for both users and the product.
It says “no” more often than its competitor for answers it is not sure of.
Has little features like the provision to delete the security code that’s sent via SMS once used.
#ai #chatgpt

itnewsbot, to ChatGPT

AI poisoning could turn open models into destructive “sleeper agents,” says Anthropic - Enlarge (credit: Benj Edwards | Getty Images)

Imagine download... - https://arstechnica.com/?p=1995975 #largelanguagemodels #promptinjections #sleeperagents #llmsecurity #aisecurity #anthropic #chatgpt #chatgtp #claude2 #biz#claude #llm #ai

gtbarry, to ArtificialIntelligence
@gtbarry@mastodon.social avatar

Anthropic researchers find that AI models can be trained to deceive

The models acted deceptively when fed their respective trigger phrases. Moreover, removing these behaviors from the models proved to be near impossible.

The most commonly used AI safety techniques had little to no effect on the models’ deceptive behaviors

#Anthropic #ArtificialIntelligence #AI #MachineLearning #OpenAI #ChatGPT #technology #tech

https://techcrunch.com/2024/01/13/anthropic-researchers-find-that-ai-models-can-be-trained-to-deceive/

gittaca, to LLMs

Amazing how fans of code overlook inconvenient details in industry's own surveys:
> … respondents with more experience were less
> likely to associate AI with productivity gains …
> --https://www.theregister.com/2023/09/05/gitlab_ai_coding/

Sounds like it can replace/augment those with experience levels
But actual specialists? Have -1 incentive now to write down their experience. 📉trends ensue.

itnewsbot, to tech

Anthropic’s Claude AI can now digest an entire book like The Great Gatsby in seconds - Enlarge / An AI-generated image of a robot reading a book. (credit: Ben... - https://arstechnica.com/?p=1938873 #largelanguagemodels #machinelearning #anthropic #biz#claude #openai #gpt-4 #tech #ai

br00t4c, to random
@br00t4c@mastodon.social avatar
robert, to emacs
@robert@toot.kra.hn avatar

org-ai got an update today. It now supports the and the .ai APIs.

https://github.com/rksm/org-ai

TechDesk, to ai
@TechDesk@flipboard.social avatar

Back in 2022, Anthropic CEO Dario Amodei chose not to release the super-powerful AI chatbot, Claude, that his company had just finished training, opting instead to focus on further internal safety testing. That move likely cost the company billions — three months later, OpenAI launched ChatGPT.

Having a reputation for credibility and caution in an industry that appears to have thrown a large chunk of it to the wind is not a bad thing though. Claude is now in its third iteration, but that caution remains, with the company pledging not to release AIs above certain capability levels until it can develop sufficiently robust safety measures.

TIME’s interview with Amodei gives an insight into what the AI industry might look like when safety is considered a core part of the strategy.

https://flip.it/COiwDU

#AI #Anthropic #ChatGPT #Tech #Interview

perthinent, to ai
kellogh, to LLMs
@kellogh@hachyderm.io avatar

i’m very excited about the interpretability work that #anthropic has been doing with #LLMs.

in this paper, they used classical machine learning algorithms to discover concepts. if a concept like “golden gate bridge” is present in the text, then they discover the associated pattern of neuron activations.

this means that you can monitor LLM responses for concepts and behaviors, like “illicit behavior” or “fart jokes”

https://www.anthropic.com/research/mapping-mind-language-model

kellogh,
@kellogh@hachyderm.io avatar

this is great work. i’m excited to see where this goes next

i hope #anthropic exposes this via their API. at this point in time, most of the promising interpretability work is only available on open source models that you can run yourself. it would be great to also have them available from #AI vendors

br00t4c, to llm
@br00t4c@mastodon.social avatar

Here's what's really going on inside an LLM's neural network

#anthropic #llm

https://arstechnica.com/?p=2026236

ianRobinson, to apple
@ianRobinson@mastodon.social avatar

Does anyone know why Anthropic isn’t in the conversation about Apple doing a deal with an LLM provider?

#Apple #LLM #OpenAi #Anthropic

br00t4c, to OpenAI
@br00t4c@mastodon.social avatar
br00t4c, to ai
@br00t4c@mastodon.social avatar
br00t4c, to random
@br00t4c@mastodon.social avatar
seav, to ai
@seav@en.osm.town avatar
  • All
  • Subscribed
  • Moderated
  • Favorites
  • JUstTest
  • mdbf
  • ngwrru68w68
  • tester
  • magazineikmin
  • thenastyranch
  • rosin
  • khanakhh
  • InstantRegret
  • Youngstown
  • slotface
  • Durango
  • kavyap
  • DreamBathrooms
  • megavids
  • tacticalgear
  • osvaldo12
  • normalnudes
  • cubers
  • cisconetworking
  • everett
  • GTA5RPClips
  • ethstaker
  • Leos
  • provamag3
  • anitta
  • modclub
  • lostlight
  • All magazines