@therealjimlove I'm a little puzzled at why you'd think a dramatic re-enactment sheds light on the issue. This is settled: #AGI fantasists like Lemoine are mystics, not scientists, & what they're doing is #MagicalThinking. Dramatic re-enactments actually just trowel on more layers of magickal glamour.
Artificial General Intelligence (AGI) is now discussed (and worried about) more than ever. But is AGI really possible? What assumptions are behind the idea that it is? How plausible are they?
On May 30th, 3.30pm-5pm (CET), philosopher Mazviita Chirimuuta (Edinburgh) will discuss these and related questions here at Umeå University.
So many people talk about ensuring that #AI and #AGI are programmed to not eradicate humanity, and almost no one talks about becoming the kind of species that artificial intelligences would want to work and chill with.
The media is obsessed with the story of Geoffrey Hinton, the “godfather of AI” who left Google to warn us about his life’s work.
Hinton isn’t worried about the immediate harms of AI and even dismisses those concerns as not “existentially serious.” But his warnings are nothing but sci-fi fantasies that distract us from real problems. For Disconnect, I argue we should ignore him.
I'm not arguing w the fact that AI poses risks. I AM ceaselessly annoyed by the pattern
This is not new or novel. It was women - @timnitGebru, @mmitchell_ai, me, et al - who rang the AI alarm years ago & were retaliated against, pushed out for doing so.
Almost every week now, + despite statements to the contrary by many #AI #scientists and #programmers, the utopias of #IsaacAsimov and #PhilipKDick (+ others) are making a leap forward.
Due to all the white noise + the hype regarding #AI, most of the general public...
...will make the potential learning curve of #AGIs a lot steeper. Why? Well, for three reasons:
i/ Because after digesting most of the world's information from the #internet and other online sources, and for lack of comparable databases from #alien species, there is not much more knowledge to be accumulated ("only" knowledge to be processed differently, which is not as "steep").
ii/ Because the #InternetOfThings will provide "almost" infinite (and increasing) data points to be processed.
iii/ Because, more importantly, #AI can finally learn to differentiate between fiction and reality.
This said, I am "finally" being joined in my skepticism by a renowned #academic from #ComputerScience, whose stance coincides with mine (though I do not agree with his choice of fiction; see the boost of an older thread in the follow-up):
...on human society. In a new essay for #Time, he rings the alarm bells, painting a pretty dire picture of a future determined by an #AI that can outsmart us.
"Sadly, I now feel that we're living the movie 'Don't Look Up' for another existential threat: unaligned #superintelligence," #Tegmark wrote, comparing what he perceives to be a lackadaisical response to a growing #AGI threat to director Adam #McKay's popular climate change satire...
"A recent survey*..."
Even without #AGI, "...the current crop of less sophisticated #AIs already poses a threat, from #misinformation-spreading synthetic content to the threat of AI-powered #weaponry...
Although #humanity is racing toward a cliff, we're not there yet, and there's still time for us to slow down, change course and avoid falling off – and instead enjoying the amazing benefits that safe, aligned #AI has to offer."
Listened to a podcast this morning where the host was talking about "cognitive abilities" of LLMs and the guest objected. When pressed for a better alternative, the guest offered "capabilities," but I think that's still an overstatement.
@Dogzilla @emilymbender They are not like us because we have not begun to try to make them like us. They are still like all the other machines, which only perform the functions they were designed to perform.
LLMs and all AI in industry are not in the category of #AGI, which is a speculative research topic as yet.
EDIT: the word "hallucination" is probably out of place and misleading here, as @apodoxus argues in this thread. The anthropomorphic connotations of the word may actually create more misconceptions than it helps eliminate.
Should companies be responsible for taking an ethical approach to developing technology, such as ensuring it does not increase inequality?
I would have thought "of course" they are responsible. But it appears that some think we should develop technology and leave the consequences to governments to sort out! #AI #AGI #LLM #ArtificialIntelligence
Amen to this idea: "Let's have cross-disciplinary conversations so more people can spot these problems."
In Sydney today, a symposium will be held where experts from a wide variety of disciplines, such as economics and the arts, will discuss how to reduce the risks of #AI - ChatLLM23
Listening to very smart people talk about #GPT4 I'm reminded of the joke about a checkers-playing dog.
A guy has a dog that plays checkers. "My goodness," everyone says, "that's amazing. What a brilliant dog!"
"Not really," he replies, "I beat him four games out of five."
That's GPT4. Its capacities are amazing and completely unexpected.
But it's also so limited. You shouldn't back the dog in a checkers tournament, and you shouldn't use an LLM as a medical assistant or in many other ways.
@ct_bergstrom A dog, as a living being, has intelligence, but #chatgpt, being an object, is not intelligent. It's an inanimate object without thought, emotion or any feelings. ChatGPT is also not an #ai or #agi, as many people call it; it's a large language model, meaning it is trained on a large corpus of language as its base and can only answer our questions from the dataset it has access to.
"It’s increasingly looking like this may be one of the most hilariously inappropriate applications of AI that we’ve seen yet." I am riveted by the extensive documentation of how ChatGPT-powered Bing is now completely unhinged. @simon has chronicled it beautifully here: https://simonwillison.net/2023/Feb/15/bing/
...I am not sure if "The Fifth Law of Robotics" by Nikola #Kesarovski,
"A robot must know it is a robot" (also a book title*), really can be a viable solution to this problem. We all know how the concept of #slavery turned out for humanity: to this day, it suffers from this crime.
...it'll have human #bias. Humans have always been great at bending or breaking the law when it suited their interests. How could a #Superintelligence created with human values not arrive at the same, self-preserving conclusion?
A gloomy, yet, IMO, quite fitting assessment of the shape of things to come unless there's a #Chernobyl-style "fallout" before #GAI evolves into #AGI + humanity gets its act together and, as Prof. #Tegmark admonishes: "Just look up!"
If we look at the international situation today (#ClimateCrisis, wars, one small part of humanity living relatively well in a #PostColonial world order), it would take any #GAI I've read about in #SciFi but a split second to determine what's the root of the problem: #humanity.
And then, no...