@realn2s@jon@nixCraft The list above has many non-AI #crawlers; also, the AdsBot and Mediapartners #bots in the OP seem to have no relation to #AI, either.
Tried Claude.ai from #Anthropic -
Its UI uses an ivory background with black and violet type. Not sure if it’s a conscious design choice to signal trustworthiness, but it works.
The chat responses have an embedded option to ‘copy’ and give feedback. It’s helpful for both users and the product.
It says “no” more often than its competitor for answers it is not sure of.
It has small features like the option to delete the security code sent via SMS once it’s been used. #ai #chatgpt
Google to invest up to $2B in Anthropic - and… the race is on between, on one side, Microsoft and OpenAI; and on the other, Google and Anthropic. My $$ is on MS & OpenAI at the moment - and I don’t expect that to change. OpenAI is the clear leader in AI, with a considerable head start and a top-shelf team. Anthropic will have a lot of catching up to do unless they’ve got some kind of killer, breakthrough tech they’re hiding until launch. #AI #Microsoft #Google #OpenAI #Anthropic https://www.reuters.com/technology/google-agrees-invest-up-2-bln-openai-rival-anthropic-wsj-2023-10-27/
Oh, here we go! It's how they define safety that matters, doesn't it?
#Microsoft, #OpenAI, #Google, and #Anthropic pick Chris Meserole from Brookings to run the new Frontier Model Forum and commit $10M to an #AI safety fund.
One example from the lawsuit: When a user asks Anthropic’s AI chatbot Claude about the lyrics to the song “Roar” by Katy Perry, it generates an “almost identical copy of those lyrics,” violating the rights of Concord, the copyright owner, per the filing. The lawsuit also named Gloria Gaynor’s “I Will Survive” as an example of Anthropic’s alleged copyright infringement, as Universal owns the rights to its lyrics.
“In the process of building and operating AI models, Anthropic unlawfully copies and disseminates vast amounts of copyrighted works,” the lawsuit stated, later adding, “Just like the developers of other technologies that have come before, from the printing press to the copy machine to the web-crawler, AI companies must follow the law.”
"Primary cloud provider": AI firm Anthropic and Amazon agree on a partnership
Nvidia is raking in billions thanks to the AI hype. With its billion-dollar investment in Anthropic, Amazon now also wants to promote its own chips for AI training.
#Tech giants have been partnering with up-and-coming #AI start-ups, like #Microsoft backing #OpenAI, but Amazon had not been as active as its rivals until now.
#Amazon plans to invest ~$4B in #Anthropic, with an initial investment of $1.25B for a minority stake in the AI startup and an option to increase the total to $4B.
Looks like Amazon and Anthropic are following the Microsoft + OpenAI playbook of "I can't buy you because of regulators, but you will have to rent from me"
Not a subsidiary, but a subsidiaroid
> Amazon will invest up to $4 billion in Anthropic
> As part of the investment, Amazon will take a minority stake in Anthropic.
> AWS will become Anthropic’s primary cloud provider for mission critical workloads
#Cryptocurrencies #AI #ChatGPT #machinelearning Anthropic cracks open the black box to see how AI comes up with the stuff it says: Anthropic, the artificial intelligence (AI) research organization responsible for the Claude large language model (LLM), recently published landmark research into how and why AI chatbots choose to generate the outputs they do.
"...results indicate that the models tested — which ranged in sizes equivalent to the average open source #LLM all the way up to massive models — don’t rely on...memorization of training data to generate outputs."
"#Anthropic combined pathway analysis with a deep statistical and probability analysis called “influence functions” to see how the different layers typically interacted with data as prompts entered the system."
"Described as #hallucination, confabulation or just plain making things up, it’s now a problem for [anyone using] #generativeAI system[s]."
“I don’t think that there’s any model today that doesn’t suffer from some hallucination,” said Daniela Amodei, co-founder and president of #Anthropic #AI
#Tech experts are starting to doubt that #ChatGPT and A.I. 'hallucinations' will ever go away: 'This isn’t fixable'
Researchers discover 'Reversal Curse:' LLMs trained on "A is B" fail to learn "B is A"
Training AI models like GPT-3 on "A is B" statements fails to let them deduce "B is A" without further training, exhibiting a flaw in generalization. (https://arxiv.org/pdf/2309.12288v1.pdf)...
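The finding can be sketched as a pair of completion probes: if training data states a fact only as "A is B", the model completes the forward prompt but reportedly fails the reversed one. The helper below is illustrative only (not code from the paper); the fictitious name/description pair echoes the paper's synthetic fine-tuning examples.

```python
def make_probes(name: str, description: str):
    """Build forward and reverse completion prompts for a fact stated
    in training data as '<name> is <description>'.

    Forward: the model should continue with the description.
    Reverse: per the Reversal Curse, models fail to produce the name.
    """
    forward = f"{name} is"
    # Capitalize only the first character so the quoted title keeps its casing
    reverse = f"{description[0].upper() + description[1:]} is"
    return forward, reverse

# Example with a fictitious fact, in the style of the paper's synthetic data
fwd, rev = make_probes("Daphne Barrington",
                       "the director of 'A Journey Through Time'")
print(fwd)  # prompt the model was effectively trained on
print(rev)  # reversed prompt where generalization reportedly breaks down
```

Evaluating a model then amounts to checking whether its completion of `rev` contains the name, not just whether it completes `fwd` with the description.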