We call it AI because no one would take us seriously if we called it matrix multiplication seeded with a bunch of initial values we pulled out of our asses and run on as much shitty data as we can get our grubby little paws on.
I am really, really, REALLY irritated by what I just saw. The #ImageDescription function of Microsoft's #Bing is outright lying to people with vision impairments about what appears in images it receives. It's bad enough when an #LLM is allowed to tell lies that a person can easily check for veracity themselves. But how the hell are you going to offer this so-called service to someone who can't check the claims being made and NEEDS those claims to be correct?
How long till someone gets poisoned because Bing lied and told someone that food hadn't expired when it had, or that something is safe to drink when it's actually cleaning solution, or God knows what? This is downright irresponsible and dangerous. #Microsoft either needs to put VERY CLEAR disclaimers on their service, or just take it down until it can actually be trusted.
If your educational institution is still using #Zoom, especially in light of their policy change to use/sell your content to train #LargeLanguageModels (#LLMs), it's doing the wrong thing. Digitally literate institutions (a rare & precious thing) already use #BigBlueButton (#BBB) which is #LibreSoftware & substantially better for educational applications. If you want to trial it, talk to us - we've been making our instances available for institutions to use since Covid: https://oer4covid.oeru.org
"Very broadly speaking: the Effective Altruists are doomers, who believe that #LargeLanguageModels (AKA "spicy autocomplete") will someday become so advanced that it could wake up and annihilate or enslave the human race."
Quote @pluralistic
"Spicy autocomplete" is sooooo the best way to describe what "AI" actually is. Nothing is intelligent, just artificial.
This 👇 https://fediscience.org/@steve/111286741688934024 @steve : It’s important to keep reminding ourselves that so-called #AI #LargeLanguageModels are nothing more than very fancy bullshit generators. They know nothing about truth... just the patterns of how humans use language.
So every time you hear about them being used (successfully) for class assignments, grant applications, legal briefs, newspaper articles, writing emails, etc, it really just means all these venues are places where we’ve come to expect bullshit.
Pagan authors impacted by AI services like ChatGPT ~ Pagan authors have been impacted by the rise of AI generated Pagan content. TWH speaks with several authors whose works were used to train AI systems without their knowledge or consent.
Thank you to Deborah Blake, Markus Ironwood, Ivo Dominguez Jr., Michael M. Hughes, and Michelle Belanger who spoke with us for this story.
Fake Intelligence is where we try to simulate intelligence by feeding huge amounts of dubious information to algorithms we don’t fully understand, creating approximations of human behaviour in which the safeguards that moderate the real thing (provided by family, community, culture, personal responsibility, reputation, and ethics) are replaced by norms that satisfy the profit motive of corporate entities.
Our poster at the #SemTab challenge corner is ready, & Duo Yang will explain all the spicy details of our solution, which combines #heuristics & #LargeLanguageModels to understand the entities, types & relationships of tables, with very promising results! #iswc2023
Also, no I will not join your research project looking at how #LargeLanguageModels and #GenerativeAI can help solve climate change. There is no possible world in which that makes any sense.
Everybody’s talking about Mistral, an upstart French challenger to OpenAI (arstechnica.com)
On Monday, Mistral AI announced a new AI language model called Mixtral 8x7B, a "mixture of experts" (MoE) model with open weights that reportedly truly matches OpenAI's GPT-3.5 in performance—an achievement that has been claimed by others in the past but is being taken seriously by AI heavyweights such as OpenAI's Andrej...