A study that confirms what I’ve been suspecting for a while: fine-tuning a #LLM with new knowledge increases its tendency to hallucinate.
If the new knowledge wasn’t in the original training set, the model has to shift its weights from their previous optimum to a new state that accommodates both the old and the new knowledge, and that new state may not be optimal for either.

Without a fresh validation round against the full previous cross-validation and test sets, that shift is likely to increase the chances of the model going off on a tangent.
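To make the point concrete, here's a minimal sketch of what that re-validation gate could look like. Everything here is hypothetical: the models are stand-in lookup tables, and `accept_finetune` and `max_regression` are names I'm inventing for illustration, not any library's API.

```python
# Sketch: gate a fine-tuned model on the ORIGINAL validation set,
# not only on the new-knowledge data. "Models" are stand-in callables.

def accuracy(model, dataset):
    """Fraction of (prompt, expected) pairs the model answers correctly."""
    return sum(model(x) == y for x, y in dataset) / len(dataset)

def accept_finetune(base_model, tuned_model, original_val, max_regression=0.02):
    """Accept the fine-tune only if accuracy on the original validation
    set has not dropped by more than max_regression."""
    base_old = accuracy(base_model, original_val)
    tuned_old = accuracy(tuned_model, original_val)
    return (base_old - tuned_old) <= max_regression

# Toy demonstration: the fine-tune learned a new fact but regressed on "2+2".
base = {"2+2": "4", "capital of France": "Paris"}.get
tuned = {"2+2": "5", "capital of France": "Paris", "new fact Q": "new fact A"}.get

original_val = [("2+2", "4"), ("capital of France", "Paris")]

print(accept_finetune(base, tuned, original_val))  # False: regression detected
```

The same idea scales up: run the tuned checkpoint back through the full evaluation suite and reject it if performance on the old distribution drops past a threshold, rather than trusting loss on the fine-tuning data alone.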
“Why is it that so many companies that rely on monetizing the data of their users seem to be extremely hot on AI? If you ask Signal president Meredith Whittaker (and I did), she’ll tell you it’s simply because ‘AI is a surveillance technology.’”
You already know not to take an AI chatbot seriously. But there may be reason to be even more cautious. New research has found that many AI systems have already started to deliberately present human users with false information. Science Alert explains why “AI developers do not have a confident understanding of what causes undesirable AI behaviors like deception.” https://flip.it/ZbnJtj #Science #AI #ArtificialIntelligence #Chatbot #Tech
What we have at the moment isn't AGI and therefore its capacity for genuine harm is (I believe) limited. However, the idea that we seem to be teaching these things to deceive by accident is concerning.
#Apple #iPadPro #AI #iPad: "All that stuff — the paint, the piano, the trumpet, the arcade machine, the illustrator's table — do you feel any hostility toward it? Do you want to see it destroyed and symbolically turned into an Apple device? Does it give you any satisfaction to see record players annihilated, and cameras all squished, and crumbly, and exploding?
And to switch things around a bit, take a look at your nearest Apple device and think about the last time you fantasized about that thing getting crushed. Was it yesterday? Maybe it was five minutes ago. In any case, you probably like it less than you like, say, your guitar.
Almost exactly 40 years ago, Apple released its most famous ad, "1984," in which a monochrome society of shambling drones is under the spell of some kind of computerized dictator. The prisoners of this terrible society are then liberated from their monotony by a hammer-throwing savior representing the Macintosh computer, and a glorious, colorful future is unleashed.
Fast forward 40 years, and Apple is the most valuable company in the world, releasing a commercial in which symbols of creativity, color, joy, human passion, and playfulness are piled into the center of a grey concrete void, and crushed by an industrial machine until they become a little Apple-branded rectangle.
I’m glad Apple isn’t going to foist another LLM like ChatGPT or Claude on the world. Using machine learning for small specific tasks and increasing the number of ML-enhanced tasks until there is nothing left to enhance is a better way to go.
If I want an LLM, I’ll use Claude or ChatGPT.
“Apple Will Revamp Siri to Catch Up to Its Chatbot Competitors
Apple plans to announce that it will bring generative A.I. to iPhones …”
I reflexively cringed when I saw this ad. Not only is it wasteful; the message it sends appears to be that Apple literally crushes everything you love to make the new iPad.