PavelASamsonov, to random
@PavelASamsonov@mastodon.social

The true power of #GenAI is not technological but rhetorical: almost all conversations about it are about what executives say it will do "one day" or "soon" rather than what we actually see (and of course no mention of a business model, which doesn't exist).

We are told to simultaneously believe AI is so "early days" as to excuse any lack of real usefulness, and that it is so established - even "too big to fail" - that we are not permitted to imagine a future without it.

mikarv, to Futurology
@mikarv@someone.elses.computer

Meta's #Llama 2 license has an unusual clause whereby they withdraw your right to use the model if you allege #Meta has breached your own IP rights by training their stuff on your intellectual property. #copyright #genai #LLama2

afeinman, to random
@afeinman@wandering.shop

HOW TO SPOT A DEEP FAKE:

  1. You can't.

Don't think you can. You can spot clumsy ones, but you've already missed a dozen others. We're past the stage where even expert practitioners can have a 100% success rate.

Instead, think about how to avoid taking action, or trusting someone, because of who they seem to be. Holding onto the fantasy that "I can spot 'em!" is harmful, and moves the onus of responsibility from collective to personal.

This is also true for #genAI, of course.

matthewskelton, to llm
@matthewskelton@mastodon.social

"the real-world use case for large language models is overwhelmingly to generate content for spamming"

Excellent article by Amy Castor

#GenAI #LLM #Crypto #Scam

https://amycastor.com/2023/09/12/pivot-to-ai-pay-no-attention-to-the-man-behind-the-curtain/

cassidy, to ai
@cassidy@blaede.family

“AI” as currently hyped is giant billion-dollar companies blatantly stealing content, disregarding licenses, deceiving about capabilities, and burning the planet in the process.

It is the largest theft of intellectual property in the history of humankind, and these companies are knowingly and willingly ignoring the licenses, terms of service, and laws that we lowly individuals are beholden to.

https://www.nytimes.com/2024/04/06/technology/tech-giants-harvest-data-artificial-intelligence.html?unlocked_article_code=1.ik0.Ofja.L21c1wyW-0xj&ugrp=m

#AI #GenAI #LLM #LLMs #OpenAI #ChatGPT #GPT #GPT4 #Sora #Gemini

filipw, to ai
@filipw@mathstodon.xyz

great article - AI Prompt Engineering Is Dead.

sounds like the much-heralded job of the future, "prompt engineer", is no longer needed 😅

"Battle and his collaborators found that in almost every case, this automatically generated [AI-generated] prompt did better than the best prompt found through trial-and-error. And the process was much faster, a couple of hours rather than several days of searching."

🔗 https://spectrum.ieee.org/prompt-engineering-is-dead
#ai #genai #machinelearning

timbray, to cryptocurrency
@timbray@cosocial.ca

It’s nauseating that the hyperscalers are crankin’ the carbon to inflate the AI bubble like there’s no tomorrow (which there won’t be, for my children, if we don’t cut back) but hey, don’t forget that Bitcoin is still in the running for the single most dangerous-to-the-planet use of computers.

https://www.theverge.com/2024/5/15/24157496/microsoft-ai-carbon-footprint-greenhouse-gas-emissions-grow-climate-pledge

#cryptocurrency #genai

abucci, to midjourney
@abucci@buc.ci

Nightshade 1.0 is out: https://nightshade.cs.uchicago.edu/index.html

From their "What is Nightshade?" page:

Since their arrival, generative AI models and their trainers have demonstrated their ability to download any online content for model training. For content owners and creators, few tools can prevent their content from being fed into a generative AI model against their will. Opt-out lists have been disregarded by model trainers in the past, and can be easily ignored with zero consequences. They are unverifiable and unenforceable, and those who violate opt-out lists and do-not-scrape directives can not be identified with high confidence.

In an effort to address this power asymmetry, we have designed and implemented Nightshade, a tool that turns any image into a data sample that is unsuitable for model training. More precisely, Nightshade transforms images into "poison" samples, so that models training on them without consent will see their models learn unpredictable behaviors that deviate from expected norms, e.g. a prompt that asks for an image of a cow flying in space might instead get an image of a handbag floating in space.

-E

smach, to ai
@smach@fosstodon.org

Generative AI bias can be substantially worse than in society at large. One example: “Women made up a tiny fraction of the images generated for the keyword ‘judge’ — about 3% — when in reality 34% of US judges are women. . . . In the Stable Diffusion results, women were not only underrepresented in high-paying occupations, they were also overrepresented in low-paying ones.”
#AI #GenAI #GenerativeAI #LLM #LLMs
https://www.bloomberg.com/graphics/2023-generative-ai-bias/

tante (edited), to ai
@tante@tldr.nettime.org

The growing backlash against AI

While the crowd at #SXSW2024 booing a sizzle reel of people either promising the beauty of the future "AI" will bring or claiming it to be "without alternative" was funny and went viral for all the right reasons, the event speaks to a deeper shift in perception. #ai #genAI #luddism

https://tante.cc/2024/03/18/5115/

btravern, to mtg
@btravern@dice.camp

To no surprise, Hasbro has gone back on their word about not using #AI in #MTG or #DnD.

CatherineFlick, to LLMs
@CatherineFlick@mastodon.me.uk

Just FYI, if you have older parents or other family members, set up some sort of shibboleth with them so they know what to ask you if you ever call them asking for something. These new generative models are going to be extremely convincing, and the idiots in charge of these companies think they can use guardrails to stop it being used inappropriately. They can't. #genAI #LLMs #chatgpt
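The shibboleth advice is essentially a human challenge-response protocol: never repeat the shared secret aloud, answer a fresh challenge with something derived from it. As a rough illustration only (the secret, function names, and 8-character digest length are all invented for this sketch, not part of the post):

```python
import hashlib
import hmac

# Example value: a passphrase agreed in person, never said over the phone.
SECRET = b"our-family-passphrase"

def respond(challenge: str, secret: bytes = SECRET) -> str:
    """Derive a short response from the challenge and the shared secret,
    so an eavesdropper (or a voice clone replaying old calls) learns
    nothing reusable."""
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()[:8]

def verify(challenge: str, response: str, secret: bytes = SECRET) -> bool:
    """Check a caller's response in constant time."""
    return hmac.compare_digest(respond(challenge, secret), response)
```

For family use the "HMAC" is just a pre-agreed question and answer; the code only shows why a fresh challenge beats a fixed password.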

horovits, to ai
@horovits@fosstodon.org

#AI took out the fun part of #coding, the creation, leaving us to debug and test auto-generated code. Not fun 😕

And it seems our software has also become worse since the #GenAI era.

A keynote by @kevlin, sharing developer research and thoughts.

jonippolito, to Cybersecurity
@jonippolito@digipres.club

A cybersecurity researcher finds that 20% of the software packages recommended by GPT-4 are fake, so he publishes a harmless version of one himself (15,000 code bases already depend on it) to prevent some hacker from shipping a malware version first.

Disaster averted in this case, but there aren't enough fingers to plug all the AI-generated holes 😬

https://it.slashdot.org/story/24/03/30/1744209/ai-hallucinated-a-dependency-so-a-cybersecurity-researcher-built-it-as-proof-of-concept-malware
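One defensive habit this story suggests: treat AI-recommended dependencies as untrusted input and check them against a vetted list before installing, rather than piping suggestions straight into the package manager. A minimal sketch, with a hypothetical `safe_to_install` helper and example package names:

```python
# Example allowlist: packages your team has actually reviewed.
VETTED = {"requests", "numpy", "flask"}

def safe_to_install(suggested: list[str], vetted: set[str] = VETTED) -> tuple[list[str], list[str]]:
    """Split suggested package names into (approved, needs_review).
    Anything not on the allowlist gets flagged for a human to check,
    which catches hallucinated names before they reach `pip install`."""
    approved = [p for p in suggested if p.lower() in vetted]
    flagged = [p for p in suggested if p.lower() not in vetted]
    return approved, flagged
```

A real pipeline would also check the registry's publication date and download history, but the allowlist alone blocks the "install whatever the chatbot said" failure mode.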

ppatel, to LLMs
@ppatel@mstdn.social

One wonders how effective #LLM translations can be when the corpus used to train them is this bad. Do we have a #GIGO problem?

Research Suggests A Large Proportion Of Web Material In Languages Other Than English Is Machine Translations Of Poor Quality Texts.

https://www.techdirt.com/2024/01/29/research-suggests-a-large-proportion-of-web-material-in-languages-other-than-english-is-machine-translations-of-poor-quality-texts/

#GenAI #AI

jon, to random
@jon@henshaw.social

Tell me you used #GenAI without telling me you used generative AI.

fenneladon, to random

Every app and site should have a "never show me synthetic content" default and option. Instead, every app and site is trying to force us to prioritise their low-quality, unreliable, literal-in-the-philosophical-sense-bullshit synthetic content. As a product manager, I'm beyond embarrassed for everyone involved in these actively user-, society-, and environment-hostile choices. 🙄 #GenAI

ppatel, to opensource
@ppatel@mstdn.social

I expected something like this after Apple's October #OpenSource AI effort. The potential #accessibility implications are pretty significant here.

Apple partners with University of California researchers to release open-source #AI model #MGIE, which can edit images based on natural language instructions

Apple releases ‘MGIE’, a revolutionary AI model for instruction-based image editing

https://venturebeat.com/ai/apple-releases-mgie-a-revolutionary-ai-model-for-instruction-based-image-editing/

#GenAI #photos #photography

mikarv, to random
@mikarv@someone.elses.computer

While AI firms face criticism for their models containing detailed personal data about individuals, and are thinking about how to mitigate this, UK intelligence agencies are seeking powers to effectively lower the oversight they face when building models, precisely so they can draw unstructured data into personal data about people, claiming that AI firms can do it (whether or not legally!) but they can't. https://www.theguardian.com/technology/2023/aug/01/uk-intelligence-spy-agencies-relax-burdensome-laws-ai-data-bpds?CMP=share_btn_tw

shortridge, to Cybersecurity
@shortridge@hachyderm.io

The 2024 Verizon Data Breach Investigations Report (#DBIR) is out this morning, and I make sense of it in my new post: https://kellyshortridge.com/blog/posts/shortridge-makes-sense-of-verizon-dbir-2024/

I focused on what felt like the most notable points, from #ransomware to MOVEit to web app pwnage to #GenAI and more.

I have insights, quibbles, and hot takes as always — but the fact remains it’s our best source of empirical data on cyberattack impacts. If you’re a #cybersecurity vendor, please consider contributing data to it.

OmaymaS, to ai
@OmaymaS@dair-community.social

I feel dizzy, sick and bored at the AI discourse.

We keep hearing the same bullshit.

We keep seeing new variations of the same flawed products.

We keep reading papers that state the obvious.

We keep pushing back the nonsense.

We keep seeing people cheering for the same nonsense.

We keep being pushed to embrace that nonsense.

🤕

judeswae, to OpenAI
@judeswae@toot.thoughtworks.com

"I believe that artificial intelligence has three quarters to prove itself before the apocalypse comes, and when it does, it will be that much worse, savaging the revenues of the biggest companies in tech," predicts Ed Zitron.

https://www.wheresyoured.at/peakai/

ceedee666, to OpenAI

@noybeu sues #OpenAI for spreading false information.

https://noyb.eu/en/chatgpt-provides-false-information-about-people-and-openai-cant-correct-it

This is going to be interesting as it’s about the very foundation of #GenAI.

DrFerrous, to random
@DrFerrous@hachyderm.io

As educators and scientists, we can and should communicate clearly that generative AI tools are not sentient, have no capacity for truth, and are merely complex statistical algorithms dressed up in a plain language outfit.

timbray, to LLMs
@timbray@cosocial.ca

I see that openai.com/gptbot is crawling my blog, top to bottom, side to side. I’m sure OpenAI has consulted the “Rights” link clearly displayed on every page, invoking a Creative Commons license that freely grants rights to reuse and remix but not for commercial purposes.

#genAI #llms
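For site owners in the same position, OpenAI documents a robots.txt rule for its crawler's `GPTBot` user agent. Note that robots.txt is purely advisory, so this expresses intent rather than enforcing it:

```
User-agent: GPTBot
Disallow: /
```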
