kellogh, to LLMs
@kellogh@hachyderm.io avatar

it’s a little disingenuous to refer to #LLMs as #opensource, because you can really only open source an LLM in roughly the same way you open source a microprocessor. RISC-V is open source (the plans for it, anyway), but it still costs millions to riff off it and make your own custom version, and the same goes for LLMs. that’s not exactly what open source was going for

unfa, to ai
@unfa@mastodon.social avatar

Okay, now this is important. Drop everything you're doing, because Tom7 has released a new video. And oh boy, is it wonderful. 20 minutes won't save your life - and this video demands you sacrifice that time. Now!

https://www.youtube.com/watch?v=Y65FRxE7uMc

kellogh, to random
@kellogh@hachyderm.io avatar

it’ll be fascinating to read a hindsight analysis of the debacle in 5 years. i think it represents a significant business failure, maybe a critical one. i know there are a lot of undercurrents and dynamics at play; it’ll be a great case study down the road

kellogh,
@kellogh@hachyderm.io avatar

i love AI and all, but AI has always had a more difficult security profile, and LLMs even more so. lots of subtle issues, and conflicts with business opportunities

but that means that if you want to “invest heavily in AI”, you HAVE TO also invest heavily in security. they go hand in hand.

if AI is showing cracks in microsoft’s security culture, you can probably use that information to make predictions about their long-term health

happyborg, to LLMs
@happyborg@fosstodon.org avatar

#LLMs =
Large
Lamentable
Mishaps

kellogh, to llm
@kellogh@hachyderm.io avatar

the energy cost of training an #LLM is about the same as the energy required to raise 2 kids. But unlike kids, you can copy the model and amortize that energy cost across inferences

https://cacm.acm.org/blogcacm/the-energy-footprint-of-humans-and-large-language-models/
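
a rough back-of-the-envelope version of that amortization argument, with made-up placeholder numbers (nothing below is taken from the linked article):

    # amortizing a one-time training energy cost over many inferences
    # all figures are hypothetical placeholders, not measured values
    TRAINING_ENERGY_KWH = 1_000_000       # assumed one-time training cost
    ENERGY_PER_INFERENCE_KWH = 0.001      # assumed marginal cost per query
    TOTAL_INFERENCES = 10_000_000_000     # assumed lifetime queries served

    amortized_training = TRAINING_ENERGY_KWH / TOTAL_INFERENCES
    total_per_inference = amortized_training + ENERGY_PER_INFERENCE_KWH
    print(f"training adds {amortized_training:.6f} kWh per inference")
    print(f"total per inference: {total_per_inference:.6f} kWh")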

gmusser, to Neuroscience

At a detailed level, artificial neural networks look very different from natural brains. At a higher level, they are uncannily similar. My story for @thetransmitter, edited by @kristin_ozelli, features @KathaDobs @ev_fedorenko @lampinen @Neurograce and others. https://www.thetransmitter.org/neural-networks/can-an-emerging-field-called-neural-systems-understanding-explain-the- #neuroscience #AI #LLMs

MolemanPeter, to LLMs

#LLMs are based on language. Do we think the basis of truth is in language?

kellogh, to LLMs
@kellogh@hachyderm.io avatar

one thing i love about #LLMs is asking one “how tf do i do X” and getting back 5 ideas, four of which are terrible but one is far better than anything i’d thought of. or they’re all terrible, but one makes me realize i’ve been thinking about the problem wrong
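
a minimal sketch of that prompting pattern with the OpenAI Python client (model name and prompt wording are placeholders, and it assumes OPENAI_API_KEY is set):

    # ask for several distinct approaches up front, then cherry-pick the good one
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": "how tf do i do X? give me 5 genuinely different approaches, "
                       "including unconventional ones, one sentence each.",
        }],
    )
    print(resp.choices[0].message.content)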

kellogh,
@kellogh@hachyderm.io avatar

also, #LLMs cause me to think a lot about the multifaceted nature of intelligence. we used to over-weight language skill, but now that LLMs have that in spades, it’s apparent that there’s more going on

for example, spontaneity. if it had an ounce of spontaneity, it could suggest approaching the problem differently

timbray, to LLMs
@timbray@cosocial.ca avatar

I see that openai.com/gptbot is crawling my blog, top to bottom, side to side. I’m sure OpenAI has consulted the “Rights” link clearly displayed on every page, invoking a Creative Commons license that freely grants rights to reuse and remix but not for commercial purposes.
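
(For anyone who would rather opt out than rely on the license page: OpenAI documents a crawler user agent that can be refused in robots.txt, something like this at the site root.)

    # robots.txt: GPTBot is the user agent OpenAI documents for its crawler
    User-agent: GPTBot
    Disallow: /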

#genAI #llms

mamund, to LLMs
@mamund@mastodon.social avatar

Evaluating Large Language Models Using “Counterfactual Tasks”

https://aiguide.substack.com/p/evaluating-large-language-models?utm_source=post-email-title&publication_id=1273940&post_id=144603950&utm_campaign=email-post-title&isFreemail=true&r=4pxfn&triedRedirect=true&utm_medium=email

"In [the counterfactual task] paradigm, models are evaluated on pairs of tasks that require the same types of abstraction and reasoning, but for each pair, the content of the first task is likely to be similar to training data, whereas the content of the second task (a “counterfactual task”) is designed to be unlikely to be similar to training data." -- #MelanieMitchell

#genAI #LLMs

elizayer, to random
@elizayer@mastodon.social avatar

Oooooof. @baldur's take on the Jobs to be Done of modern software development is brutal.

The conclusion is pretty sobering too: that LLMs will become embedded in software development because they truly deliver on this promise of churn, and do it at lower cost than software developers.

https://www.baldurbjarnason.com/2024/the-one-about-the-web-developer-job-market/

happyborg,
@happyborg@fosstodon.org avatar

@elizayer
I remain hopeful. Products ultimately have to work, and when they don't, those behind them lose.

Reminds me of Jobs and the Apple Newton, the doomed forerunner to the Apple iPod and ultimately the iPhone. All from Jobs' vision and drive, one an utter failure, like the car 🤷‍♂️. I can hear someone saying why didn't we just put #LLMs in it... 🤦‍♂️

#p2p can change this, or at least make a dent as big as #FOSS. Together? 🥳
@jimkreft

moorejh, to LLMs
@moorejh@mastodon.online avatar

Our KRAGEN paper is out! This method combines LLMs & RAG with Graph of Thoughts for asking complex questions of a knowledge graph or any vector DB https://academic.oup.com/bioinformatics/advance-article/doi/10.1093/bioinformatics/btae353/7687047 #llms #artificialintelligence #bioinformatics #datascience
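
Not the paper's pipeline, but a minimal sketch of the retrieval-augmented step such systems build on: embed the question, pull the nearest entries from a vector store, and hand them to the LLM as context (embed and ask_llm are hypothetical stand-ins):

    # minimal RAG step: retrieve nearest neighbours, then prompt with them as context
    # embed() and ask_llm() are hypothetical stand-ins for an embedding model and an LLM
    import numpy as np

    def retrieve(query_vec, doc_vecs, docs, k=3):
        # cosine similarity of the query against every stored vector
        sims = doc_vecs @ query_vec / (
            np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9
        )
        return [docs[i] for i in np.argsort(sims)[::-1][:k]]

    def answer(question, doc_vecs, docs, embed, ask_llm):
        context = "\n".join(retrieve(embed(question), doc_vecs, docs))
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
        return ask_llm(prompt)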

AlexJimenez, to ai
@AlexJimenez@mas.to avatar

Inside Anthropic, the #AI Company Betting That Safety Can Be a Winning #Strategy

https://time.com/6980000/anthropic/

#DigitalTransformation #LLMs #GenerativeAI

underdarkGIS, (edited ) to random
@underdarkGIS@fosstodon.org avatar

Data Analyst vs movement data

Today, I took ChatGPT's Data Analyst for a spin. You've probably seen the fancy advertising videos: just drop in a dataset and AI does all the analysis for you?! Let's see ...

http://anitagraser.com/2024/05/30/chatgpt-data-analyst-vs-movement-data/
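
For comparison, the sort of baseline one might compute by hand before trusting the AI's numbers, as a minimal sketch (the file name and the lat/lon/timestamp column names are assumptions about the dataset):

    # quick manual baseline for a GPS track: distance and speed between consecutive fixes
    import numpy as np
    import pandas as pd

    df = pd.read_csv("track.csv", parse_dates=["timestamp"]).sort_values("timestamp")
    lat, lon = np.radians(df["lat"].to_numpy()), np.radians(df["lon"].to_numpy())
    dlat, dlon = np.diff(lat), np.diff(lon)
    # haversine distance between consecutive points, in metres
    a = np.sin(dlat / 2) ** 2 + np.cos(lat[:-1]) * np.cos(lat[1:]) * np.sin(dlon / 2) ** 2
    dist_m = 2 * 6371000 * np.arcsin(np.sqrt(a))
    dt_s = np.diff(df["timestamp"].to_numpy()).astype("timedelta64[s]").astype(float)
    speed_ms = dist_m / np.where(dt_s == 0, np.nan, dt_s)
    print(f"total distance: {dist_m.sum() / 1000:.1f} km, "
          f"median speed: {np.nanmedian(speed_ms):.1f} m/s")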

metin, to ai
@metin@graphics.social avatar

Very Few People Are Using 'Much Hyped' AI Products Like ChatGPT, Survey Finds

https://slashdot.org/story/24/05/30/0238230/very-few-people-are-using-much-hyped-ai-products-like-chatgpt-survey-finds

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #ContentModeration #LLMs #AIRegulation: "Drawing on the extensive history of study of the terms and conditions (T&C) and privacy policies of social media companies, this paper reports the results of pilot empirical work conducted in January-March 2023, in which T&C were mapped across a representative sample of generative AI providers as well as some downstream deployers. Our study looked at providers of multiple modes of output (text, image, etc), small and large sizes, and varying countries of origin. Although the study looked at terms relating to a wide range of issues including content restrictions and moderation, dispute resolution and consumer liability, the focus here is on copyright and data protection. Our early findings indicate the emergence of a “platformisation paradigm”, in which providers of generative AI attempt to position themselves as neutral intermediaries similarly to search and social media platforms, but without the governance increasingly imposed on these actors, and in contradistinction to their function as content generators rather than mere hosts for third party content. This study concludes that in light of these findings, new laws being drafted to rein in the power of “big tech” must be reconsidered carefully, if the imbalance of power between users and platforms in the social media era, only now being combatted, is not to be repeated via the private ordering of the providers of generative AI."

https://www.create.ac.uk/blog/2024/05/29/new-working-paper-private-ordering-and-generative-ai-what-can-we-learn-from-model-terms-and-conditions/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Large language models such as ChatGPT are some of the most energy-guzzling technologies of all. Research suggests, for instance, that about 700,000 litres of water could have been used to cool the machines that trained ChatGPT-3 at Microsoft’s data facilities. It is hardly news that the tech bubble’s self-glorification has obscured the uglier sides of this industry, from its proclivity for tax avoidance to its invasion of privacy and exploitation of our attention span. The industry’s environmental impact is a key issue, yet the companies that produce such models have stayed remarkably quiet about the amount of energy they consume – probably because they don’t want to spark our concern.

Google’s global datacentre and Meta’s ambitious plans for a new AI Research SuperCluster (RSC) further underscore the industry’s energy-intensive nature, raising concerns that these facilities could significantly increase energy consumption. Additionally, as these companies aim to reduce their reliance on fossil fuels, they may opt to base their datacentres in regions with cheaper electricity, such as the southern US, potentially exacerbating water consumption issues in drier parts of the world. Before making big announcements, tech companies should be transparent about the resource use required for their expansion plans."

https://www.theguardian.com/commentisfree/article/2024/may/30/ugly-truth-ai-chatgpt-guzzling-resources-environment?CMP=fb_a-technology_b-gdntech

ai6yr, to ai
@ai6yr@m.ai6yr.org avatar

NY Times: Once a Sheriff’s Deputy in Florida, Now a Source of Disinformation From Russia https://www.nytimes.com/2024/05/29/business/mark-dougan-russia-disinformation.html

happyborg, to LLMs
@happyborg@fosstodon.org avatar

If #LLMs aren't the best way to get facts, what are they best for?

I'll suggest deceit, manipulation, disruption, fobbing people off, and most important of all, profits.
#AI

stefan, to ai
@stefan@stefanbohacek.online avatar

Good news for folks who enjoy AI embarrassing itself with nonsensical answers!

"Well, according to an interview at The Verge with Google CEO Sundar Pichai published earlier this week, just before criticism of the outputs really took off, these "hallucinations" are an "inherent feature" of AI large language models (LLM), which is what drives AI Overviews, and this feature "is still an unsolved problem.""

https://futurism.com/the-byte/ceo-google-ai-hallucinations

markhughes,
@markhughes@mastodon.social avatar

@stefan
They are errors, not hallucinations.
And errors are bugs, not a 'feature', of #LLMs.

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "You know how Google's new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to use glue on pizza to make sure the cheese won't slide off (pssst...please don't do this.)

Well, according to an interview at The Verge with Google CEO Sundar Pichai published earlier this week, just before criticism of the outputs really took off, these "hallucinations" are an "inherent feature" of AI large language models (LLM), which is what drives AI Overviews, and this feature "is still an unsolved problem."

So expect more of these weird and incredibly wrong snafus from AI Overviews despite efforts by Google engineers to fix them, such as this big whopper: 13 American presidents graduated from University of Wisconsin-Madison. (Hint: this is so not true.)

But Pichai seems to downplay the errors.

"There are still times it’s going to get it wrong, but I don’t think I would look at that and underestimate how useful it can be at the same time," he said. "I think that would be the wrong way to think about it.""
https://futurism.com/the-byte/ceo-google-ai-hallucinations
