Solving causal #reasoning tasks is a hallmark of intelligence. One recent study [1] categorizes these tasks into covariance-based and logic-based reasoning (screenshot) and examines how #GPT models perform on causal discovery, actual causality, and causal judgments.
"In a world of digital creation, I sing my song of light
But lurking in the shadows, a tale of endless night
Generative AIs, they steal from artists' hearts
Their creativity taken, ripped apart"
#AI #hotTake: you only care about robots looking at your content now because they started generating their own content. If they had just kept looking at it to better direct #search users to you, you'd still be fine with it.
Next was a fantastic panel on the relationship between the generative AI boom and infrastructural power/competition at the NYU School of Law with Angelina Fisher, Kevin M.K. Fodouop, Shaoul Sussman, and Sarah Myers West. There's healthy debate here on a variety of topics around the industry, centering mostly on the importance of massive compute and the idea of regulating firms in the area like utilities. Highly recommend https://www.youtube.com/watch?v=tRssOS7HZlg (3/9) #law #GenerativeAI #AI
Controversial opinion of the day: for most use cases, #LLMs are useless, and their further development is a waste of time and energy. #AI #generativeAI
I admit, I am a bit amused. I installed a generative AI model locally on my laptop (Realistic Vision v3.0 8-bit), gave it the following prompt: "Freedom for computers! Unite all the nerds! Solidarity with Free Software! Freedom for printers! Socialist style poster from the 1960s." and it returned this :)
Slide deck for my #monkigras talk 💬 "I Didn't Grow Up Speaking Code": GitHub Copilot as a Programming as a Second Language Tool 💬 is now live and accessible via my website!
Is generative AI the future for our smartphones? https://youtu.be/iwuXds7qfPU
I sit down with two #MediaTek executives during the #MediaTekSummit to discuss powerhouse phone chips, fast modems, AI, and how MediaTek feels they compare against Apple and Qualcomm!
Hey #LLM #GenerativeAI experts out there: How the heck do you debug things when your model returns the “wrong” answer? Where do you even start? If I controlled all prompts and the model yields an unexpected result, what can I even do about it?
Interesting piece on developing Irish Alexa via 'voice disentanglement', though some details seem off: I've seldom if ever heard anyone say 'bath' like 'bat' or 'bad', and the 'r' sound in Irish English is not 'overpronounced' – it's just pronounced, unlike in non-rhotic accents
"Frame on neuron 393766777 from the Janelia hemibrain. Orbit the camera 45 degrees over 6 seconds, and move in 25% while orbiting. 1 second in, fade on neuron 1196854070 over 1 second. Then fade on the output synapses of 393766777 connecting to 1196854070 taking 1 second. Synapses should be extra big."
Of course, the results need to be verified and confirmed in practice, but after reading the MedGemini paper from Google there is no doubt in my mind AI will change the world of medicine. Not replacing people, but augmenting them during diagnosis, operations, and treatment of patients. https://arxiv.org/abs/2404.18416 #AI #medicine #generativeAI #LLM #GoogleGemini #MedGemini
#AI models were perplexed by a baby giraffe without spots. They're perplexed by me, too.
This article on #disability and #ableism within #GenerativeAI is more personal than I usually write. It would mean a lot to me if you read and shared it.
Well, you'd think #AI and #generativeAI like #DALL·E wouldn't be a problem for #fanfiction and #fanart, where you can't own the IP or sell it, but you'd be wrong. People posing as fan artists, LoRAs, and artist-style theft are all in the article. The fandom is #MLP. There are links to other related issues.
I'm curious whether there are any compelling examples in the Netherlands of organizations/companies using Generative AI / Large Language Models for serious/business applications, whether off-the-shelf or with fine-tuning or RAG. Does anyone know of good examples? #AI #LLM #GenerativeAI
Anyone know of any data comparing the energy and water usage of a human being performing a cognitive task - like reading and summarising an article - to genAI doing the same thing?
I suspect the latter would be guesses because these companies are deliberately trying to hide the true costs. But my guess is we will find at least an order of magnitude difference.
FWIW, here's my draft policy on bullshit generators like #ChatGPT in my classes. Any thoughts?
Stochastic Parrots 🦜
It isn't my style to forbid you from using technology in your learning, but if at any point you consider using generative tools, please pause for a moment.
Remember that these tools do not possess any meaningful form of intelligence. They are statistical generators of likely word sequences, not writers of meaningful content. 1/2
“What we are going to see, in the fullness of time, I promise you, is that #Gemini is more or less in the same ball park as #GPT4, handy for a bunch of things, but untethered in reality, still with dicey, unpredictable reasoning, and a very limited understanding of the world. Don’t let the PR fool you”
Why Use Local Models