“If you’re looking to understand the philosophy that underpins Silicon Valley’s latest gold rush, look no further than #OpenAI’s #ScarlettJohansson debacle”
He puts into clear terms what had previously been an unarticulated, creeping suspicion I had about #GenAI. There are clearly many angles from which to approach what's going on with #AI #hype, but I appreciate this one quite a bit.
Over 72 Fediverse musicians!
72 Brand new original tracks!!
20+ Genres! #Fedivision2024 is almost upon us!!
Where’s the #hype?!!
Start listening and voting:
THIS SUNDAY!
19 May 2024, 1 PM UTC
Who’s excited? #Fedivision
#AI #GenerativeAI #AIEthics #ResponsibleAI #Hype: "We have been here before. Other overhyped new technologies have been accompanied by parables of doom. In 2000, Bill Joy warned in a Wired cover article that “the future doesn’t need us” and that nanotechnology would inevitably lead to “knowledge-enabled mass destruction”. John Seely Brown and Paul Duguid’s criticism at the time was that “Joy can see the juggernaut clearly. What he can’t see—which is precisely what makes his vision so scary—are any controls.” Existential risks tell us more about their purveyors’ lack of faith in human institutions than about the actual hazards we face. As Divya Siddarth explained to me, a belief that “the technology is smart, people are terrible, and no one’s going to save us” will tend towards catastrophizing.
Geoffrey Hinton is hopeful that, at a time of political polarization, existential risks offer a way of building consensus. He told me, “It’s something we should be able to collaborate on because we all have the same payoff”. But it is a counsel of despair. Real policy collaboration is impossible if a technology and its problems are imagined in ways that disempower policymakers. The risk is that, if we build regulations around a future fantasy, we lose sight of where the real power lies and give up on the hard work of governing the technology in front of us."
#AI #Influencers #InfluencerMarketing #Hype: "Like the threat behind crypto’s “have fun staying poor” slogan, AI needs the rest of us to believe in its unstoppable ascendancy because that belief is basically all it has. AI products aren’t about whether anyone wants or needs AI products. They’re about how people could want or need those products, eventually, if everyone stays the course and also keeps pumping money into AI companies. You can call a product bad as long as you immediately point out that obviously it’s going to become good (Brownlee even nods to this in his Humane review, saying that the pin is “the new worst product I’ve ever reviewed in its current state”), because AI products are less products and more promotional tools for the future, for technological advancement, for whatever other big concepts Silicon Valley goons trot out to throw a smokescreen over the barely-functional, largely useless junk they need us to believe is inevitable." https://aftermath.site/humane-ai-marques-brownlee
"I am now, of course, adding to this neverending discourse with this article. But I want to be clear: No one is under any obligation to be nice to the creators of the Humane pin or the product itself, which, even if it worked, is a gadget that relies on mass content theft and the scraping of huge amounts of human knowledge and creativity to make a product that is marketed as making us more “human.” The people making this argument are people who have a vested interest in the general public continuing to canonize, support, and spend money on a Silicon Valley vision of the future that involves the automation of everything, the displacement of huge numbers of workers, and a new, AI-led internet that has so far done little but flood the web with low-quality junk, been used to make fake porn to harass women, and led eager-beaver know-nothing CEOs to prematurely lay off huge numbers of workers to replace them with AI tools built on the back of uncompensated human labor and training largely done by underpaid “ghost workers” in the developing world.
This does not mean I want every product to fail, or want for there to never be another good product again. The existence of the Humane Ai Pin is an example that even in a post-Juicero age, there is endless appetite for rich people to spend money funding people to make absurd products at great cost to everyone involved."
#AI #GenerativeAI #Hype #Blockchain: "When I boil it down, I find my feelings about AI are actually pretty similar to my feelings about blockchains: they do a poor job of much of what people try to do with them, they can't do the things their creators claim they one day might, and many of the things they are well suited to do may not be altogether that beneficial. And while I do think that AI tools are more broadly useful than blockchains, they also come with similarly monstrous costs." https://www.citationneeded.news/ai-isnt-useless/
#AI #AGI #ComputerScience #Hype #Ideology: "This introductory essay for the special issue of First Monday, “Ideologies of AI and the consolidation of power,” considers how power operates in AI and machine learning research and publication. Drawing on themes from the seven contributions to this special issue, we argue that what can and cannot be said inside of mainstream computer science publications appears to be constrained by the power, wealth, and ideology of a small cohort of industrialists. The result is that shaping discourse about the AI industry is itself a form of power that cannot be named inside of computer science. We argue that naming and grappling with this power, and the troubled history of core commitments behind the pursuit of general artificial intelligence, is necessary for the integrity of the field and the well-being of the people whose lives are impacted by AI."
Many users pay for LLM subscriptions. But the margins are small, because what companies can charge for these services is barely above the cost of running them. There is also a lot of competition between different providers. The amount of investment is just completely disproportionate; it is a thousand times too high.
Why do you think that is?
There is just a ton of hype and outlandish expectations. Newspapers are running headlines like "All jobs will be replaced soon" or "The 2028 U.S. elections will no longer be run by humans." There is talk of artificial general intelligence. But these LLMs are more similar to large databases.
Artificial general intelligence (AGI) refers to a program that could solve all conceivable tasks. Do you doubt that LLMs are a step in this direction?
I don't believe that LLMs bring us any closer to human-like or general intelligence. These exaggerated expectations are also due to prominent studies which claimed that AI models performed better than humans on law and math exams. We now know that the language models had simply memorized the right answers. https://www.nzz.ch/english/google-researcher-says-ai-hype-is-skewing-investment-ld.1825122
@remixtures I pay for ChatGPT and get my money's worth, but I find local LLMs are almost as powerful, and I can give them much larger tasks, like writing descriptions of hundreds of photos. ChatGPT does an amazing job of turning handwritten pages into markdown-formatted text.
@njrabit: yes, I agree that the technology can be very useful, but I'm afraid the companies haven't yet been able to find a viable business model - especially because new LLMs are constantly emerging. Have you tried this blind test -> https://chat.lmsys.org/?
I find it properly disconcerting that many academics fell for the hype when it is clearly the result of a very effective lobbying effort on the part of Big Tech. So just to get things straight, here’s my personal experience of what happened in the field of Natural Language Processing (#NLP), starting back in 2017 (references at the end of the thread) 1/6
#AI #Google #DeepMind #Science #Hype: "In a perspective paper published in Chemistry of Materials this week, Anthony Cheetham and Ram Seshadri of the University of California, Santa Barbara selected a random sample of the 380,000 proposed structures released by DeepMind and say that none of them meet a three-part test of whether the proposed material is “credible,” “useful,” and “novel.” They believe that what DeepMind found are “crystalline inorganic compounds and should be described as such, rather than using the more generic label ‘material,’” which they say is a term that should be reserved for things that “demonstrate some utility.”
In the analysis, they write “we have yet to find any strikingly novel compounds in the GNoME and Stable Structure listings, although we anticipate that there must be some among the 384,870 compositions. We also note that, while many of the new compositions are trivial adaptations of known materials, the computational approach delivers credible overall compositions, which gives us confidence that the underlying approach is sound.”
In a phone interview, Cheetham told me “the Google paper falls way short in terms of it being a useful, practical contribution to the experimental materials scientists.” Seshadri said “we actually think that Google has missed the mark here.”"
#AI #GenerativeAI #LLMs #Chatbots #Hype: "...[T]he AI hype of the last year has also opened up demand for a rival perspective: a feeling that tech might be a bit disappointing. In other words, not optimism or pessimism, but scepticism. If we judge AI just by our own experiences, the future is not a done deal.
Perhaps the noisiest AI questioner is Gary Marcus, a cognitive scientist who co-founded an AI start-up and sold it to Uber in 2016. Altman once tweeted, “Give me the confidence of a mediocre deep-learning skeptic”; Marcus assumed it was a reference to him. He prefers the term “realist”.
He is not a doomster who believes AI will go rogue and turn us all into paper clips. He wants AI to succeed and believes it will. But, in its current form, he argues, it’s hitting walls.
Today’s large language models (LLMs) have learnt to recognise patterns but don’t understand the underlying concepts. They will therefore always produce silly errors, says Marcus. The idea that tech companies will achieve artificial general intelligence by 2030 is “laughable”.
Generative AI is sucking up cash, electricity, water, copyrighted data. It is not sustainable. A whole new approach may be needed. Ed Zitron, a former games journalist who is now both a tech publicist and a tech critic based in Nevada, puts it more starkly: “We may be at peak AI.”" https://www.ft.com/content/648228e7-11eb-4e1a-b0d5-e65a638e6135