The article describes a conversation between Daniel's boss and #ELIZA (one of the first chatbots), which went something like this:
VP: Do you think that I can use the computer this morning?
ELIZA: Why do you ask?
VP: I might be able to make some additional telcomp sales.
ELIZA: Why aren't you sure?
VP: My prospective customers have never seen a demonstration of the system.
ELIZA: Does this mean very much to you?
VP: Of course it does.
ELIZA: You seem very positive of that.
VP: This is ridiculous.
ELIZA: What do you mean this is ridiculous?
VP: Is there any reason why I can't use the computer now?
ELIZA: Tell me more about your prospective customers.
VP: Call me
Note that after that remark the VP left out the period. Therefore, of course, the computer didn't answer him. This so infuriated the VP, who thought I was playing games with him, that he called me up, woke me from a deep sleep, and said:
VP: Why are you being so snotty with me?
Me: What do you mean why am I being snotty to you?
The VP angrily read the dialog that 'we' had been having, and couldn't get any
response but laughter from me. It took me a while to convince him it really was the
computer.
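The rote, keyword-driven style of the replies above can be illustrated with a minimal ELIZA-style sketch. The rules below are illustrative stand-ins I made up for this example, not Weizenbaum's original DOCTOR script:

```python
import re

# A few ELIZA-style rules: a keyword pattern paired with a response template.
# These rules are hypothetical, chosen to mimic the dialogue quoted above.
RULES = [
    (re.compile(r"\bI might\b", re.I), "Why aren't you sure?"),
    (re.compile(r"\bcomputer\b", re.I), "Why do you mention the computer?"),
    (re.compile(r"\bof course\b", re.I), "You seem very positive of that."),
    (re.compile(r"this is (\w+)", re.I), "What do you mean this is {0}?"),
]
FALLBACK = "Tell me more."

def respond(utterance: str) -> str:
    """Return the first matching rule's response, echoing any captured text."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

for line in ["I might be able to make some additional telcomp sales.",
             "This is ridiculous."]:
    print(respond(line))
# -> Why aren't you sure?
# -> What do you mean this is ridiculous?
```

The first matching rule wins, and anything the pattern captures is echoed back into the reply, which is roughly why ELIZA's responses feel attentive while understanding nothing.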
#AI #GenerativeAI #Chatbots #GPT4 #Eliza #TuringTest: "The experiment involved 652 participants who completed a total of 1,810 sessions, of which 1,405 games were analyzed after excluding certain scenarios like repeated AI games (leading to the expectation of AI model interactions when other humans weren't online) or personal acquaintance between participants and witnesses, who were sometimes sitting in the same room.
Surprisingly, ELIZA, developed in the mid-1960s by computer scientist Joseph Weizenbaum at MIT, scored relatively well during the study, achieving a success rate of 27 percent. GPT-3.5, depending on the prompt, scored a 14 percent success rate, below ELIZA. GPT-4 achieved a success rate of 41 percent, second only to actual humans.
GPT-3.5, the base model behind the free version of ChatGPT, has been conditioned by OpenAI specifically not to present itself as a human, which may partially account for its poor performance. In a post on X, Princeton computer science professor Arvind Narayanan wrote, "Important context about the 'ChatGPT doesn't pass the Turing test' paper. As always, testing behavior doesn't tell us about capability." In a reply, he continued, "ChatGPT is fine-tuned to have a formal tone, not express opinions, etc, which makes it less humanlike. The authors tried to change this with the prompt, but it has limits. The best way to pretend to be a human chatting is to fine-tune on human chat logs.""
OK, here we go: on a panel on AI in journalism this Friday, I'll be the one with the "Oh, that's not really exciting. That's just #ELIZA on steroids" statement.
'National Eating Disorder Association made the bold move of dissolving its eating disorder hotline...to replace the hotline staff with a chatbot named Tessa. The mass firing occurred only four days after employees unionized...After less than a week of using Tessa, NEDA shuttered the chatbot. According to a post on the nonprofit’s Instagram page, Tessa “may have given information that was harmful and unrelated to the program.”'
"Paris Marx is joined by Ben Tarnoff to discuss the #ELIZA chatbot created by Joseph #Weizenbaum in the 1960s and how it led him to develop a critical perspective on #AI and computing that deserves more attention during this wave of AI hype."
Out now: The 2nd edition of Marianna Baranovska-Bölter's and my book "HELLO, I'M ELIZA"! Most of the chapters have been revised, some extended, and an all-new chapter about the search for and discovery of #Weizenbaum's original #ELIZA #SourceCode has been added, as well as a facsimile of that code.
This is a beautifully written, haunting, ambiguous and resonant exploration of one of the founding fathers of #AI - Joseph Weizenbaum - and the demons that drove his work in early #ConversationalAI with #Eliza. As Weizenbaum rightly asserts, our context, our history, our experience, shapes our relationship with, and toward, technology.
"Yet, as Eliza illustrated, it was surprisingly easy to trick people into feeling that a computer did know them – and into seeing that computer as human. Even in his original 1966 article, Weizenbaum had worried about the consequences of this phenomenon, warning that it might lead people to regard computers as possessing powers of “judgment” that are “deserving of credibility”. “A certain danger lurks there,” he wrote."
A certain danger lurks there. As applicable now in an age of #GenerativeAI as it was in the 1960s.
Thank you @bentarnoff for such an incisive piece. h/t to @CriticalAI for bringing it to my attention.
I kinda wish higher ed had bothered to push faculty to master the past decades' worth of tech before they had to wrestle with next-gen #stochasticparrots that have huge cultural, legal and ethical implications we haven't fully sorted out yet.
I'm truly terrified by how frequent reports of ChatGPT use are becoming in online courses (I feel like I hear new anecdotes daily), based on discussions with colleagues doing learning management support.
And it's faculty who are concerned, by the way, not the usual "students will cheat" shenanigans you mostly hear in the media. #eliza #stochasticparrots
Lo and behold: the second edition of Marianna Baranovska-Bölter's and my essay collection on #Weizenbaum's #ELIZA is "in print" now! It should be out in July!
Joseph Weizenbaum, the author of the famous #Eliza chatbot, was somewhat startled and disappointed by the world's reaction to his software.
This prompted him to write the 1976 book "Computer Power and Human Reason: From Judgment to Calculation"
I found a PDF last night and started reading it. A lot of it is applicable to, if not a remarkable mirror of, the #StochasticParrot mania we find ourselves in.
@kentindell "Simple" is the wrong adjective, in my opinion. In a world where loneliness rises to astonishing heights, the feeling that someone is actually listening to you, in this case ELIZA, is a very powerful force that should never be underestimated. #Eliza #Weizenbaum
If Weizenbaum’s cautions settled around one idea, it was restraint. “Since we do not now have any ways of making computers wise,” he wrote, “we ought not now to give computers tasks that demand wisdom.”