I guess making money somewhat honestly, with customers who actually pay for a service that offers at least some guarantees of privacy and safety, is not as lucrative as running an open platform where people are tricked into handing over all their data while being spied on for whatever reason.
I just finished a productive Copilot session on a complex programming task. I came up with most of the algorithms, wrote a lot of the code, and had to guide it a lot throughout, but credit where it's due: Copilot did make small but meaningful contributions along the way.
Overall, not a pair programmer, but someone useful to talk to when WFH alone on complex tasks.
Enough for Copilot to earn a ✋🏽. And I like how it responded to that. It has got that part down. 😉
MLX is Apple's framework for machine learning applications on Apple silicon. The MLX examples repository provides a set of examples for using the framework, including:
✅ Text models such as Llama, Mistral, Phi-2, and other transformers
✅ Image models such as Stable Diffusion
✅ Audio and speech recognition with OpenAI's Whisper
✅ Support for some Hugging Face models
*This* is the compelling #LLM use case for me. If I use a translator to write messages in French, I never have to come up with an initial attempt myself, and I lose the learning that comes with it.
If instead I put something into ChatGPT and it not only corrects my text but explains what my mistakes were, that's a huge win in terms of learning from your mistakes.
(I still don't trust the thing 100% but it's also not a high stakes situation – I'm not engaging in a nuclear arms treaty after all 😅)
Fix your shitty autocorrect! There’s no such thing as “there’re,” so quit putting it into my content.
And how come I get a word suggestion as I type, I tap it, and an entirely different word is inserted that wasn’t even one of the options offered? Sometimes not even an English word?!
Absolutely unbelievable, but here we are: #Slack by default uses messages, files, etc. to build and train #LLM models. It's enabled by default, and opting out requires a manual email from the workspace owner.
If you pick up one of the #Nvidia Orin boards, definitely get an SSD to go along with it. While it can run off an SD card, you’re going to run out of space quickly, and you’ll see a performance hit on complex tasks (like running a local #LLM). #EdgeAI #ai
To those concerned about #slack now using your chats, including trade secrets, NDA material, etc., to train their #llm: #WTF did you expect when using a third party with full content access to discuss those things? That they'd be gentlemen and not read your mail? That they somehow wouldn't try to find a way to monetize that juicy data? I am flabbergasted that people working for corporations just as immoral could have been that naive...
The hype around LLMs overstates their utility, and it will cause problems as leadership teams in organisations (and governments) buy into it.
However, they do have value as personal assistants, research assistants, and sounding boards as long as you treat all LLM output critically, especially on topics where you are not an expert.
I’m using Claude 3 Opus as a research assistant. It’s read more of the world’s info than I ever will. I am also trying out ChatGPT-4o.
Back when #BigData was the fashionable buzz word, I repeatedly had to explain to enthusiasts that archaeological data are not just Big, they are Confused and Patchy and Hairy.
I can't really see how the current generative algorithms could make me obsolete, or even speed up much of my work, because I'm in a really niche field with no commercial potential that demands constant engagement with wildly non-standardised data as well as creative writing about it.
"The biggest question raised by a future populated by unexceptional A.I., however, is existential. Should we as a society be investing tens of billions of dollars, our precious electricity that could be used toward moving away from fossil fuels, and a generation of the brightest math and science minds on incremental improvements in mediocre email writing?" (From an NYT article. See original thread.)
Do you REALLY want to get a feel for how GPT-4o does what it does? Just complete this poem — by doing so, you’ll have performed a computation similar to the one it does when you feed it a text-plus-image prompt.
Watching #GoogleIO and there are some cool demonstrations of data center cloud computing, but there's also this fog of dystopia surrounding these demos.
The announcements for search are horrifying. Google is full mask off.
Phrases like "search for something, and we'll collect all this data for you" basically equates to:
"We sucked up ALL the data from the people who really did the work, and we're going to give you the results of their hard work, but we won't take you to the site that generated the data. You can stay on the search page, and the site's traffic will plummet."