Hackers can read private #AI assistant chats even though they're #encrypted.
All non-Google chat GPTs are affected by a side channel that leaks responses sent to users.
So #Steeve got a major upgrade recently. He moved from a #gptneo (2.4B) model to a #llama2 (7B) model. Trained on 300k messages from our private chat history, Steeve is way more capable of following the conversation now. He used to have some "favorite phrases" he would say a lot, and I'm seeing less of that. His vision and reading models also got upgraded, so he gets more detail about the links and memes we share. Long live Steeve! :steeve:
LLM Agents can Autonomously Hack Websites.
"Namely, we show that GPT-4 is capable of such hacks, but existing open-source models are not. Finally, we show that GPT-4 is capable of autonomously finding vulnerabilities in websites in the wild. Our findings raise questions about the widespread deployment of LLMs."
But open-source models will reach GPT-4 levels in the very near future, so be prepared. https://arxiv.org/html/2402.06664v1?s=09 #ai #llm #cybersecurity #gpt
#Shaarli: gparted - How to prepare a disk on an EFI based PC for Ubuntu? - Ask Ubuntu - How to prepare a disk to boot in EFI mode (i.e. formatted as GPT instead of MBR).
TL;DR: GPT partition table, a 512 MB FAT32 partition with the esp+boot flags, a system partition, then any other partitions.
Optionally reserve an empty partition for Windows, plus Boot-Repair after installing Windows.
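The TL;DR above can be sketched with parted; this is a hedged example, not a tested recipe, and `/dev/sdX` is a placeholder you must replace with the right device (these commands destroy existing data on that disk):

```shell
# Create a GPT partition table (wipes the disk's partition layout)
sudo parted /dev/sdX mklabel gpt
# 512 MB EFI System Partition (the FAT32 partition with esp+boot flags)
sudo parted /dev/sdX mkpart ESP fat32 1MiB 513MiB
sudo parted /dev/sdX set 1 esp on
sudo mkfs.fat -F32 /dev/sdX1
# System partition filling the rest of the disk (leave space here instead
# if you plan to add a Windows partition later)
sudo parted /dev/sdX mkpart root ext4 513MiB 100%
```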
Does anyone have a good list of logical questions to judge a large language model's ability to reason?
Questions like "if it takes 3 hours for 3 towels to dry, how long does it take for 9 towels to dry?"
I'm playing around with Mistral's leaked 70B Miqu LLM and want to test its reasoning skills for a project I'm working on. I've been really impressed so far. It's slower than Mistral & Mixtral, but it's been producing the best-reasoned answers I've seen from an LLM. And it's running locally!
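For anyone building a similar test set, here's a minimal sketch of a harness for trick questions like the towel one; `ask_llm` is a hypothetical placeholder for whatever local inference call you use, and the second question is an illustrative example, not from the original post:

```python
# Trick reasoning questions paired with the phrase a correct answer should contain.
TRICK_QUESTIONS = [
    # Parallel processes: 9 towels dry just as fast as 3 if they dry side by side.
    ("If it takes 3 hours for 3 towels to dry, how long does it take for 9 towels to dry?",
     "3 hours"),
    # Unit-rate trap: each machine makes 1 widget in 5 minutes, so 100 machines
    # still need only 5 minutes for 100 widgets.
    ("If 5 machines make 5 widgets in 5 minutes, how long do 100 machines take to make 100 widgets?",
     "5 minutes"),
]

def score(ask_llm):
    """Return the fraction of questions whose expected phrase appears in the model's reply."""
    hits = sum(expected.lower() in ask_llm(question).lower()
               for question, expected in TRICK_QUESTIONS)
    return hits / len(TRICK_QUESTIONS)
```

Substring matching is crude but good enough for a quick local comparison between models; swap in stricter answer parsing if the replies are long.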
I'm trying to figure out if this person has blatantly copied part of my blog post without any attribution or if all their posts are GPT generated and they don't even bother to read them before publishing.
Smaug-72B, a Qwen-72B-based open-source #LLM released by Abacus #AI, tops the Hugging Face Open LLM leaderboard and outperforms #GPT-3.5 on several benchmarks
Next week I'll be starting a pretty ambitious project—50 Days of LIT Prompts. Every weekday for 10 weeks, I'll be sharing prompt patterns along with my thoughts and readings relating to Large Language Models like those behind #ChatGPT. Follow the link below, and this thread, for updates: https://sadlynothavocdinosaur.com/posts/50-days-of-lit-prompts/
I figured generating a headline is kind of the apotheosis of this week's prompts. I mean, ideally, it feels like a headline is a distillation of a text's essence.