Absolutely unbelievable, but here we are. #Slack is using messages, files, etc. for building and training #LLM models. This is enabled by default, and opting out requires a manual email from the workspace owner.
I finished the audiobook edition of Co-Intelligence by Ethan Mollick. It's an excellent overview of how LLMs will likely evolve and be used in business, education, and beyond. The audiobook edition is fantastic, and at only 4.5 hours it's not much longer than some podcasts that get released these days! Highly recommended.
The hype around LLMs overstates their utility, and that will cause problems as leadership teams in organisations (and governments) buy into it.
However, they do have value as personal assistants, research assistants, and sounding boards as long as you treat all LLM output critically, especially on topics where you are not an expert.
I’m using Claude 3 Opus as a research assistant. It’s read more of the world’s info than I ever will. I am also trying out ChatGPT-4o.
Back when #BigData was the fashionable buzz word, I repeatedly had to explain to enthusiasts that archaeological data are not just Big, they are Confused and Patchy and Hairy.
I can't really see how the current generative algorithms could make me obsolete, or even speed up much of my work, because I'm in a really niche activity with no commercial potential that demands constant engagement with wildly non-standardised data as well as creative writing about them.
"The biggest question raised by a future populated by unexceptional A.I., however, is existential. Should we as a society be investing tens of billions of dollars, our precious electricity that could be used toward moving away from fossil fuels, and a generation of the brightest math and science minds on incremental improvements in mediocre email writing?" (From an NYT article. See original thread.)
Do you REALLY want to get a feel for how GPT-4o does what it does? Just complete this poem — by doing so, you’ll have performed a computation similar to the one it does when you feed it a text-plus-image prompt.
The computation you perform embeds a billion years of pack-animal evolution: three brain layers mixing complex survival instincts with the capacity for cultural evolution.
Your process includes responsibility and, most importantly, a grasp of societal consequences. Nothing even close to this happens in #LLMs.
I understand you wish to keep it simple and help people. But machine and human computation aren't the same.
Watching #GoogleIO and there are some cool demonstrations of data center cloud computing, but there's also this fog of dystopia surrounding these demos.
The announcements for search are horrifying. Google is full mask off.
Phrases like "search for something, and we'll collect all this data for you" basically equates to:
"We sucked up ALL the data from people who really did the work, and we're going to give you the results of their hard work, but we wont take you to the site that generated the data. You can stay on the search page, and the site's traffic will plummet."
I don't personally think LLMs will ruin everything, but neither do I think they will solve everything. Despite being in the tech world, I've been skeptical of many of the applications they've appeared in over the past 18 months.
This is the cherry on top. You can no longer avoid it if you're a Google user. And even worse, its hallucinations will displace reliable but smaller sources of info.
Looks like today I finally found a good application for #LLMs: learning languages!
I've been attempting to learn #arabic through Duolingo for a while now, without much success. I figured if there's one thing language models should be good at, it's languages. So far it has actually been pretty helpful.