ajsadauskas, to ai
@ajsadauskas@aus.social avatar

In five years time, some CTO will review the mysterious outage or technical debt in their organisation.

They will unearth a mess of poorly written, poorly documented, barely functioning code their staff don't understand.

They will conclude that they did not actually save money by replacing human developers with LLMs.

#AI #LLM #LargeLanguageModels #WebDev #Coding #Tech #Technology @technology

aral, to ai
@aral@mastodon.ar.al avatar

We call it AI because no one would take us seriously if we called it matrix multiplication seeded with a bunch of initial values we pulled out of our asses and run on as much shitty data as we can get our grubby little paws on.
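Stripped of the snark, that is a fair thumbnail of how a neural-network layer starts out: a matrix of arbitrarily seeded numbers multiplied against the input. A minimal sketch in plain Python (all sizes and values invented for illustration):

```python
import random

# Illustrative only: a "layer" is a matrix multiply whose weights begin
# as arbitrary random numbers, exactly as the post describes.
random.seed(0)

def random_matrix(rows, cols):
    """Weights pulled from a random number generator."""
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

def matmul_vec(matrix, vec):
    """One forward pass: plain matrix-vector multiplication."""
    return [sum(w * x for w, x in zip(row, vec)) for row in matrix]

weights = random_matrix(3, 4)   # 4 inputs -> 3 outputs
activations = matmul_vec(weights, [1.0, 0.5, -0.5, 2.0])
print(activations)  # 3 numbers, meaningless until training adjusts the weights
```

Training then nudges those random values toward something useful; the arithmetic itself never gets more exotic than this.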

#AI #ArtificalIntelligence #MachineLearning #LLM #LargeLanguageModels

hosford42, to llm
@hosford42@techhub.social avatar

I am really, really, REALLY irritated by what I just saw. Microsoft's Bing image-description function is outright lying to people with vision impairments about what appears in the images it receives. It's bad enough when an AI is allowed to tell lies that a person can easily check for veracity themselves. But how the hell are you going to offer this so-called service to someone who can't check the claims being made and NEEDS those claims to be correct?

How long till someone gets poisoned because Bing lied and told them that food hadn't expired when it had, or that something was safe to drink when it was actually cleaning solution, or God knows what? This is downright irresponsible and dangerous. Microsoft either needs to put VERY CLEAR disclaimers on the service, or just take it down until it can actually be trusted.

simon_brooke, to llm
@simon_brooke@mastodon.scot avatar

I've been doing a bit more experimenting with LLMs and truth, and I've got an interesting one.

My experimental design was to start by asking about relationships between European monarchs, and then to start introducing fictitious monarchs, but I didn't get that far...

#1/several

aral, to ArtificialIntelligence
@aral@mastodon.ar.al avatar

Fake Intelligence is where we try to simulate intelligence by feeding huge amounts of dubious information to algorithms we don’t fully understand to create approximations of human behaviour where the safeguards that moderate the real thing provided by family, community, culture, personal responsibility, reputation, and ethics are replaced by norms that satisfy the profit motive of corporate entities.

scottmiller42, to random
@scottmiller42@mstdn.social avatar

An idea I've been kicking around for the last couple of weeks is the need for some kind of tag to use on digital content (writing, artwork, social media profiles, etc.) to specifically prohibit its use as training data by Large Language Models. The robots.txt convention is along the lines of what I'm thinking.

Yes, this would be voluntary and self-policed. Yes, I realize that many people who build LLMs will disregard the tags. It may not have any impact initially.
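The idea above could work like robots.txt does for crawlers: a machine-readable directive that compliant scrapers check before ingesting a page. The `ai-training` meta-tag name below is invented for illustration; no such standard exists, though real-world analogues like robots.txt and the "noai" tags some art sites have adopted work the same way.

```python
from html.parser import HTMLParser

class TrainingOptOutParser(HTMLParser):
    """Scans a page for a (hypothetical) meta tag denying LLM training use."""
    def __init__(self):
        super().__init__()
        self.opted_out = False

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if attrs.get("name") == "ai-training" and attrs.get("content") == "deny":
                self.opted_out = True

page = '<html><head><meta name="ai-training" content="deny"></head></html>'
parser = TrainingOptOutParser()
parser.feed(page)
print(parser.opted_out)  # True: a compliant crawler would skip this page
```

As with robots.txt, the whole scheme only constrains crawlers that choose to honor it, which is exactly the limitation the post acknowledges.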

1/x

#LargeLanguageModels

bornach, to generativeAI
@bornach@masto.ai avatar

I asked #BingChat (creative) to write a nursery rhyme about a billionaire visiting the Titanic wreck in his submersible

The chatbot generated something a bit dark

Notice that I made no mention of #Titan, #StocktonRush, or #OceanGate and it doesn't look like it did a specific Internet search for the prompt

#generativeAI #LargeLanguageModels

mjgardner, to ChatGPT

“Users speak of ChatGPT as ‘hallucinating’ wrong answers — LLMs make stuff up and present it as fact when they don’t know the answer. But any answers that happen to be correct were ‘hallucinated’ in the same way.” — @davidgerard, https://davidgerard.co.uk/blockchain/2023/06/03/crypto-collapse-get-in-loser-were-pivoting-to-ai/


persagen, to llm
@persagen@mastodon.social avatar

Establishing Trust in ChatGPT Biomedical Generated Text
Ontology-Based Knowledge Graph to Validate Disease-Symptom Links
https://arxiv.org/abs/2308.03929

  • goal: distinguish factual information from unverified data
  • one dataset from PubMed vs. ChatGPT-simulated articles (AI-generated content)
  • a striking number of links among terms in the ChatGPT KG, surpassing some of those in the PubMed KG
  • see the image caption for additional detail
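The paper's validation idea reduces to a graph comparison: treat each corpus's disease-symptom links as a set of edges and check which ChatGPT-asserted links are corroborated by the PubMed-derived graph. A toy sketch (all edges below are made up, not the paper's data):

```python
# Disease-symptom edges extracted from each corpus, as (disease, symptom) pairs.
pubmed_kg = {
    ("influenza", "fever"),
    ("influenza", "cough"),
    ("migraine", "nausea"),
}

chatgpt_kg = {
    ("influenza", "fever"),
    ("influenza", "fatigue"),   # link absent from the PubMed-derived graph
    ("migraine", "nausea"),
    ("migraine", "tinnitus"),   # another unverified link
}

# Set intersection/difference gives corroborated vs. unverified claims.
corroborated = chatgpt_kg & pubmed_kg
unverified = chatgpt_kg - pubmed_kg

print(f"corroborated: {len(corroborated)}, unverified: {len(unverified)}")
```

The paper's observation that the ChatGPT graph contains *more* links than the PubMed one corresponds to a large `unverified` set here: generated text asserts associations the literature doesn't support.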

#LLM #LargeLanguageModels #ChatGPT #KnowledgeGraph #SyntheticData #PubMed #biomedical

wagesj45, to ai
@wagesj45@mastodon.jordanwages.com avatar

i think something people don't understand about models like these is that they're fixed. they're deterministic. the same input results in the same output. in the whole discourse recently, people talk like the ai has some agency; that you're just "telling it what to do". but only in the same way you tell photoshop what to do. it's just that the type of input is different. they're complex. they're not magic. there's no ghost in the machine (yet).
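The fixed-function point can be made concrete with a toy "model": hard-coded weights plus greedy decoding always produce the same output for the same input. (Deployed chatbots usually add sampling noise on top, which is why their replies vary, but the underlying weights are just as fixed.) Everything below is invented for illustration:

```python
# A hand-coded bigram table stands in for a trained model's fixed weights.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
}

def greedy_next(token):
    """Pick the highest-probability continuation. No randomness involved."""
    options = BIGRAMS.get(token, {})
    return max(options, key=options.get) if options else None

def generate(token, steps=3):
    out = [token]
    for _ in range(steps):
        nxt = greedy_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return out

print(generate("the"))  # ['the', 'cat', 'sat', 'down'], every single time
```

Run it a thousand times and you get the same sequence: the "agency" lives entirely in the input you choose to feed it.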

bornach, to ChatGPT
@bornach@fosstodon.org avatar

Thanks to ChatGPT, it is now possible for [attoparsec] to make a real Talkie Toaster from Red Dwarf.
And yes it is just as insufferable to talk to.
https://youtu.be/HERkwJ0OIMo

ACM, (edited) to opensource
@ACM@mastodon.acm.org avatar

More than 170,000 titles, the majority of them published within the last two decades, were fed into models run by companies including Meta and Bloomberg, according to an analysis of "Books3" - the dataset harnessed by the firms to build their AI tools.

Should copyrighted work be used by #opensource platforms to train #AI models?

Source: The Guardian (https://www.theguardian.com/books/2023/aug/22/zadie-smith-stephen-king-and-rachel-cusks-pirated-works-used-to-train-ai)

#generativeai #largelanguagemodels #polloftheweek #ACM

Sevoris, to random

Crossing point observation of the day:

  • On one hand, we have new papers showing how just using the language of a specific human group can trigger implicit, hidden biases in LLMs

  • on the other hand, we have software developers working to build tools that automatically retrieve information that may be of interest, and that try to reason ahead on your interests. Highest point so far: https://new.computer/

1

wagesj45, to ai
@wagesj45@mastodon.jordanwages.com avatar

Want to know what the absolute best implementation of AI and LLMs is right now? Hands down, it's Spotify's AI DJ. Why is it the best?

  1. It's cool.
  2. Low stakes. No misinformation, just fun personalization.

AccordionGuy, to ai
@AccordionGuy@mastodon.cloud avatar

Do you REALLY want to get a feel for how GPT-4o does what it does? Just complete this poem — by doing so, you’ll have performed a computation similar to the one it does when you feed it a text-plus-image prompt.

#AI #ArtificialIntelligence #LLM #LLMs #LargeLanguageModel #LargeLanguageModels

https://www.globalnerdy.com/2024/05/15/the-simplest-way-to-illustrate-how-gpt-4o-works/
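The complete-the-poem analogy can itself be sketched in a few lines: like a reader guessing the next word, a language model picks the continuation that was most common in its training text. The four-line "training corpus" below is invented for illustration; real models predict over tens of thousands of tokens with learned weights rather than raw counts.

```python
from collections import Counter, defaultdict

# Tiny "training corpus" for the poem-completion analogy.
corpus = (
    "roses are red violets are blue "
    "sugar is sweet and so are you"
).split()

# Count which word follows which, the crudest possible "language model".
follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def complete(word):
    """Return the most frequent word seen after `word` in the corpus."""
    return follow[word].most_common(1)[0][0]

print(complete("roses"))   # 'are'
print(complete("violets")) # 'are'
```

When you complete "roses are ..." without thinking, you are doing statistically what the model does at scale: emitting the continuation your training data made most probable.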

doctorambient, to LLMs
@doctorambient@mastodon.social avatar

People: stop asking LLMs to explain their behavior.

We already know that LLMs don't have the introspection necessary to explain their behavior, and their explanations are often fanciful or "just wrong."

For instance, Gemini claims it reads your emails for training, Google says it doesn't.

(BTW, if it turns out Gemini is right and Google is lying, that might be another example of an LLM convincing me it's actually "intelligent.")

transponderings, to ChatGPT

It’s increasingly clear that Alan Turing’s ‘imitation game’ – usually known as ‘the Turing test’ – tells us nothing about whether machines can think

Instead it demonstrates how readily people can be taken in by complete and utter nonsense if it has the superficial form of an authoritative text

#ImitationGame #TuringTest #LargeLanguageModels #ChatGPT #ArtificialIntelligence

wagesj45, to internet
@wagesj45@mastodon.jordanwages.com avatar

> v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof).

Way to ruin a good thing, Meta.

mjgardner, to ChatGPT

Anyone who tells you that LLMs like ChatGPT can think or reason, or are stepping stones to true AGI, is either trying to sell you something or trying to recover the sunk cost of buying it from others.

https://toot.cafe/@baldur/111114236030617696

persagen, to llm
@persagen@mastodon.social avatar

AI language models are rife with political biases
https://www.technologyreview.com/2023/08/07/1077324/ai-language-models-are-rife-with-political-biases/

From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models
https://aclanthology.org/2023.acl-long.656.pdf

  • LM pretrained on news, discussion forums, books, online encyclopedia ...
  • this data includes opinions & perspectives which both
    (i) celebrate democracy and diversity of ideas
    (ii) are inherently socially biased

sleepytako, to random
@sleepytako@famichiki.jp avatar

"Very broadly speaking: the Effective Altruists are doomers, who believe that AI (AKA "spicy autocomplete") will someday become so advanced that it could wake up and annihilate or enslave the human race."
Quote @pluralistic

"Spicy autocomplete" is sooooo the best way to describe what "AI" actually is. Nothing about it is intelligent, just artificial.

MisuseCase, to generativeAI
@MisuseCase@twit.social avatar

It also depends on what you want the AI and/or LLM to do, and whether you care to put in the time, effort, and investment to curate the training data or not.

Many of these operations don’t want to do the work in terms of curating their training data (whether that means screening it or asking for permission or whatever) because it’s not cheap or fast!

/1 https://ourislandgeorgia.net/@Wolven/111721354763217828

simon_brooke, to Futurology
@simon_brooke@mastodon.scot avatar

"This is undesirable for scientific tasks which value truth" (Meta 'scientists', writing about their LLM).

So presumably it's fine in scientific tasks that DON'T value truth?

I wonder where you'd find people who work on scientific tasks but don't value truth? Oh, of course: in Meta's AI team.

H/t @emilymbender

https://pca.st/episode/306b9fc4-aa02-4af1-8193-80a8abb1c268

greg, to llm
@greg@clar.ke avatar

Does anyone have a good list of logic questions for judging large language models' ability to reason?

Questions like "if it takes 3 hours for 3 towels to dry, how long does it take for 9 towels to dry?"

I'm playing around with Mistral's leaked 70B Miqu LLM and want to test its reasoning skills for a project I'm working on. I've been really impressed so far. It's slower than Mistral & Mixtral, but it's been producing the best-reasoned answers I've seen from an LLM. And it's running locally!
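A harness for the kind of question the post asks about might look like the sketch below: trick questions where naive pattern-matching yields the proportional-but-wrong answer. `ask_model` is a placeholder for whatever LLM API you are testing, and the question list and scoring rule are invented for illustration:

```python
TRICK_QUESTIONS = [
    # (question, correct answer, tempting wrong answer)
    ("If 3 towels take 3 hours to dry, how long do 9 towels take?",
     "3 hours", "9 hours"),
    ("If 5 machines make 5 widgets in 5 minutes, how long do 100 machines "
     "need to make 100 widgets?",
     "5 minutes", "100 minutes"),
]

def score(ask_model):
    """Count answers containing the correct string and avoiding the trap."""
    correct = 0
    for question, right, trap in TRICK_QUESTIONS:
        answer = ask_model(question)
        if right in answer and trap not in answer:
            correct += 1
    return correct

# Usage with a stub "model" that always falls for the proportional trap:
naive = lambda q: "It would take 9 hours." if "towels" in q else "100 minutes."
print(score(naive))  # 0
```

Crude substring scoring like this misgrades paraphrased answers, so for anything serious you'd want a judge model or exact-match on a forced answer format; but as a quick local smoke test of Miqu-style models it's enough to separate pattern-matching from reasoning.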

#LLM #LLMs #Mistral #Miqu #LargeLanguageModels #GPT #ChatGPT

lightweight, to LLMs
@lightweight@social.fossdle.org avatar

If your educational institution is still using Zoom, especially in light of their policy change to use/sell your content to train AI (LLMs), it's doing the wrong thing. Digitally literate institutions (a rare & precious thing) already use BigBlueButton (BBB), which is FOSS & substantially better for educational applications. If you want to trial it, talk to us - we've been making our instances available for institutions to use since Covid: https://oer4covid.oeru.org
