Back in 2022, Anthropic CEO Dario Amodei chose not to release the super-powerful AI chatbot, Claude, that his company had just finished training, opting instead to focus on further internal safety testing. That move likely cost the company billions — three months later, OpenAI launched ChatGPT.
A reputation for credibility and caution is no bad thing, though, in an industry that appears to have thrown a large chunk of both to the wind. Claude is now in its third iteration, but that caution remains, with the company pledging not to release AIs above certain capability levels until it can develop sufficiently robust safety measures.
TIME’s interview with Amodei gives an insight into what the AI industry might look like when safety is considered a core part of the strategy.
It’s #NewstodonFriday — such a shame it’s been a slow news week! Just to refresh your memories, this is a day to feature work from newsrooms with an active presence in the #Fediverse. If you like what you see in the (long!) thread below, follow the profiles and boost their stories. If you’re a journo or newsroom that we don’t know about or if there’s a newsroom you’d love to put on our radar, please let us know in the comments below.
@themarkup reports on how Chinese migrants are following step-by-step instructions on Douyin, China’s equivalent of TikTok, in order to immigrate to the United States via Central and South America. These include tips on clearing your chat history, bribing law enforcement officers and more. On arrival in the US as asylum seekers, they find a reality they were unprepared for.
Over the last few years, the practice portion of my #legal work has slowly transitioned to consulting for other attorneys on #tax or #tech matters, and let me tell you, for all the stick our profession gets, we don’t make terrible clients.*
*Results not typical, your mileage may vary, see stores for details.
In Abu Dhabi, the Autonomous Racing League has been testing driverless motor racing, and last month 10,000 people descended on the Yas Marina race track to watch the first four-car driverless race. Hazel Southwell was one of them, and she reports on it for @arstechnica.
From the cars to the tech to the competition, she got a front-row seat for all the thrills and spills of the inaugural race. Is it the future of motorsport? Probably not, she writes, but it is “strangely exciting” and an “interesting test lab” all the same. Here’s more.
“When I say ‘I am hungry’, I am reporting on my sensed physiological states. When an LLM generates the sequence ‘I am hungry’, it is simply generating the most probable completion of the sequence of words in its current prompt.”
Two new studies published this week in the journal Science offer a deeper insight into the spread of misinformation on social media, presenting evidence not only that it changes minds, but that a small group of committed “supersharers” — predominantly older Republican women — was responsible for the vast majority of the “fake news” during the period studied.
The studies, by researchers at MIT, Ben-Gurion University, Cambridge and Northeastern, were independently conducted but complement each other well. @TechCrunch has more.