Fake Intelligence is where we try to simulate intelligence by feeding huge amounts of dubious information to algorithms we don't fully understand, creating approximations of human behaviour in which the safeguards that moderate the real thing (family, community, culture, personal responsibility, reputation, and ethics) are replaced by norms that serve the profit motive of corporate entities.
"Very broadly speaking: the Effective Altruists are doomers, who believe that #LargeLanguageModels (AKA "spicy autocomplete") will someday become so advanced that they could wake up and annihilate or enslave the human race."
Quote @pluralistic
"Spicy autocomplete" is sooooo the best way to describe what "AI" actually is. Nothing is intelligent, just artificial.
AI biases, “hallucinations” and the larger implications ~ Correspondent Star Bustamonte continues a series of articles exploring how AI and large language models are impacting Pagan publishing.
Our poster at the #SemTab challenge corner is ready & Duo Yang will explain all spicy details of our solution which combines #heuristics & #LargeLanguageModels to understand entities, types & relationships of tables with very promising results! #iswc2023
Cometh the weekend, cometh the #linkdump. My daily-ish newsletter includes a section called "Hey look at this," with three short links per day, but sometimes those links get backed up and I need to clean house. Here are the eight previous installments:
If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
The AI bubble is indeed overdue for a popping, but while the market remains gripped by #IrrationalExuberance, there's lots of weird stuff happening around the edges. Take #InjectMyPDF, which embeds repeating blocks of invisible text into your resume:
Something I see over and over again when people "debate" about using LLMs for whatever application is people hand-wringing over the accuracy and efficacy of the output. "What if it gets something wrong?!"
It's like people have forgotten that they have the ability to double check its output. Blindly using most tools is a bad idea.
The answer to "What if it's wrong?" is "You fix it", numbskull. Proofread.
Helping someone debug something; he said he'd asked ChatGPT what a series of bit shift operations was doing. He thought it was actually evaluating the code, y'know, like it presents itself as doing. Instead its example was (a) not the code he put in, with (b) incorrect annotations, and (c) even more incorrect sample outputs. He had been doing this all day and had only just started considering that maybe ChatGPT was wrong.
I was like, first of all, never do that again, and explained how ChatGPT wasn't doing anything like what he thought it was doing. We spent 2 minutes isolating that code, printing out the bit string after each operation, and he immediately understood what was going on.
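The debugging approach described above can be sketched in a few lines. The specific values and operations here are made up for illustration (the original code isn't shown in the post); the point is printing the bit pattern after each step so you can see exactly what each shift and mask does, instead of asking an LLM to guess:

```python
# Illustrative values only -- not the code from the post.
x = 0b0110_1100  # 108

def show(label, value):
    # print the value as a zero-padded 8-bit binary string
    print(f"{label:>8}: {value:08b}")

show("start", x)
x = x >> 2           # shift right by 2: drops the two low bits
show(">> 2", x)
x = x << 1           # shift left by 1: doubles the value
show("<< 1", x)
x = x & 0b0001_1111  # mask: keep only the low five bits
show("& 0x1F", x)
```

Two minutes of this tells you more than any confident-sounding annotation, because the interpreter is actually evaluating the code.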
I fucking hate these LLMs. Empowerment is learning how to figure things out, how to make tools for yourself and how to debug problems. These things are worse than disempowering, teaching people to be dependent on something that teaches them bullshit.
Edit: too many ppl reading this as "this person bad at programming" - not what I meant. Criticism is of deceptive presentation of LLMs.
@PiTau @jonny #LargeLanguageModels are exquisitely bad at anything for which there is very little human-curated training data. I asked GPT-4 via #BingChat to generate Vult-DSP for a fairly basic MIDI synthesizer and it very confidently spat out nonsense.
This 👇 https://fediscience.org/@steve/111286741688934024 @steve : It’s important to keep reminding ourselves that so-called #AI #LargeLanguageModels are nothing more than very fancy bullshit generators. They know nothing about truth... just the patterns of how humans use language.
So every time you hear about them being used (successfully) for class assignments, grant applications, legal briefs, newspaper articles, writing emails, etc, it really just means all these venues are places where we’ve come to expect bullshit.
Everybody’s talking about Mistral, an upstart French challenger to OpenAI (arstechnica.com)
On Monday, Mistral AI announced a new AI language model called Mixtral 8x7B, a "mixture of experts" (MoE) model with open weights that reportedly matches OpenAI's GPT-3.5 in performance—an achievement that has been claimed by others in the past but is being taken seriously by AI heavyweights such as OpenAI's Andrej...
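For readers unfamiliar with the term, the "mixture of experts" idea in the story above can be sketched roughly as follows. This is a toy illustration, not Mixtral's actual architecture: the experts are trivial functions, the router is a one-parameter-per-expert score, and only the top-k experts run per input (which is what makes MoE cheaper than running every expert):

```python
import math

def softmax(scores):
    # numerically stable softmax over a list of scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, router_weights, k=2):
    # router: one score per expert (toy linear router, illustrative only)
    probs = softmax([w * x for w in router_weights])
    # select the top-k experts by router probability
    top = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:k]
    # renormalise over the chosen experts and mix their outputs
    norm = sum(probs[i] for i in top)
    return sum(probs[i] / norm * experts[i](x) for i in top)

# eight toy "experts" (Mixtral 8x7B also routes among 8), each just
# scaling the input differently
experts = [lambda x, s=s: s * x for s in range(1, 9)]
router_weights = [0.1 * i for i in range(8)]
y = moe_forward(2.0, experts, router_weights, k=2)
```

Only 2 of the 8 experts execute for each input, so inference cost scales with k, not with the total parameter count.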