@diazona@misty
An accurate description of Large Language Models if you delete "hyper-intelligent" and replace "everything about everything" with "nothing about anything"
@diazona@resuna@misty
As a child I would use big words I heard on television because it impressed my parents, but I was faking that I knew what they meant. I was just using words such as "deficit" and "negative equity" in roughly the same contexts in which I had first encountered them. It gave the impression I knew a lot when in reality I knew nothing.
Based on the replies I've been getting to my prompts, I believe LLMs like Copilot, Perplexity, and Gemini are pulling the same sort of trick.
@crapaud@r_alb@arstechnica
Would have been safer if OpenAI had passed every decision to steal a celebrity's voice as a microtask to gig workers in India to annotate.
@engravecavedave@Kurt
That's precisely the problem with the anthropomorphising term "hallucination" for the "alternative facts" generated by Large Language Models. The model is always generating a remix of its training data -- whether humans interpret that output as truthful or as nonsense is largely down to luck. It depends heavily on the sparsity of training examples in the region of latent space that the Transformer network mapped your prompt to.
Amused at how Altman helped himself to the voice of a woman who had denied him multiple times, because he was fond of her*, despite the fact that she was literally the only person in recent history to sue Disney and win, and that no one in his circle tried to dissuade him (or had enough pull, or made enough of an effort, to succeed).
Says a lot about the people at the helm of the "AI revolution".
Will we honestly talk about the trickery in the "Be My Eyes Accessibility with GPT-4o" video? Like the taxi that puts on its turn signal well before the passenger signals, and has basically already passed the signalling passenger when it comes to a stop? Or that we never see any real processing time? Or that the voice is perfectly clear despite him standing on a London street with the speaker on? (1/3)
@yatil@Lottie@pixelate
Just check the video yourself. At 0:46 the taxi has already put on its left turn signal to indicate that it intends to pull over. A full 3 seconds later, at 0:49, the blind man extends his arm to signal the driver; he cannot see that the taxi was already going to pull over. GPT-4o's voice-over doesn't tell him that the taxi was already going to stop next to him, so he didn't need to do anything. He is left fully convinced that no one else flagged down the taxi.