"I believe that artificial intelligence has three quarters to prove itself before the apocalypse comes, and when it does, it will be that much worse, savaging the revenues of the biggest companies in tech," predicts Ed Zitron.
The consistent theme here is that they all want little regulation. They don't want their rivals to become entrenched.
A profile of Mistral AI CEO Arthur Mensch, who says that, as an atheist, he is uncomfortable with Silicon Valley's "AGI rhetoric" and "religious" fascination with #AI.
Google touting that its latest #AI models and services can be grounded through its search results isn't the boast it thinks it is, especially considering the quality of those results lately. Has anybody considered the feedback loop of AI-generated results being ranked higher and then being used to ground Gemini Pro?
Move over, deep learning: Symbolica’s structured approach could transform #AI
Artificial intelligence startup Symbolica emerged from stealth today and unveiled a novel approach to constructing AI models, leveraging advanced mathematics to imbue systems with human-like reasoning capabilities and unprecedented transparency.
“AI” as currently hyped is giant billion dollar companies blatantly stealing content, disregarding licenses, deceiving about capabilities, and burning the planet in the process.
It is the largest theft of intellectual property in the history of humankind, and these companies are knowingly and willingly ignoring the licenses, terms of service, and laws that we lowly individuals are beholden to.
I guess we wait this one out until the “AI” bubble bursts due to the incredible subsidization the entire industry is undergoing. It is not profitable. It is not sustainable.
It will not last—but the damage to our planet and fallout from the immense amount of wasted resources will.
Large language models can do jaw-dropping things. But nobody knows exactly why.
And that's a problem. Figuring it out is one of the biggest scientific puzzles of our time and a crucial step towards controlling more powerful future models.
What are some things educators might think about when using #genAI tools, especially when thinking about student/instructor/"user" privacy?
I was recently on this panel, where we shared a few different approaches to this. (Spoiler: I'm the skeptic who brings a #mediaLiteracy approach to these new tools.)
We are told #genAI is going to change everything...
...but every #AI "use case" so far has direct and transparent lineage from existing malignant practices.
Offshoring, algo-washing, information warfare, plagiarism, content mills and SEO spam, phishing/impersonation, revenge porn, hoodwinking investors, avoiding responsibility for management decisions, and blindly copy-pasting code from Stack Overflow.
♻️ AI Companies Running Out of Training Data After Burning Through Entire Internet | Futurism
「 As the Wall Street Journal reports, some companies are looking for alternative sources of training data now that the internet is growing too small, with things like publicly available video transcripts and even AI-generated "synthetic data" as options 」
Who knew an article on printers could be so entertaining in 2024? Enjoy this brief piece about #genAI search #enshittification (and printers … sort of) for breakfast.
"The central claim of the tech companies selling LLMs is that any work people do that results in text artifacts is just 'text in, text out' and can therefore be replaced by their synthetic text-extruding machines."
@baldur If #GenAI has a place in education at all, it’s to teach us some things we already should have known:
That text has no inherent value or meaning
That making a text longer (or shorter) has no inherent impact on its value
That vocabulary and grammatical correctness are tangential to value
Which means that in approximately all situations where an #LLM is used, the prompt is more valuable than the output. Once we recalibrate culturally, LLMs will be seen as worse than useless.
Trying to get my head around the GenAI APIs popping up all over lately. What's a good use-case for wrapping calls to ChatGPT in your own application?
For instance, Spring AI's abstractions for calling various LLMs. Seems like you could make cool party tricks of creating poems, maybe creating content for your marketing, maybe images based on your data... But is any of it useful?
Feels like these marketecture articles about "10 ways to use GenAI in Banking" (or insert your domain here) can all be boiled down to "we can help you create marketing content".
In all other areas -- the downside of generating inaccurate info vastly outweighs the benefits, right? #genai
My overall impression of the utility of #genAI so far hinges on two things: 1. hallucination is the point, and 2. how copyright claims shake out (only human-created works can claim copyright, and no, prompt engineering doesn't count).
The intersection where you want (or can tolerate some) hallucination but don't want copyright is probably pretty small.