remixtures, (edited ) to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI / #GenerativeAI is so full of bullshit!! A few days ago, I found this web page containing 206 lists of the Top 10 best TV shows, provided to the BBC by film and television journalists from all over the world: https://www.bbc.com/culture/article/20211014-the-greatest-tv-series-of-the-21st-century-who-voted. When I asked a few LLMs to generate a ranking of the TV shows appearing on that page based on the number of occurrences, they all started to hallucinate.

Besides the position and title of each TV show, I wanted the LLM to include the number of times the given show occurs across the lists on that web page. I even copied and pasted all those lists into the input text box provided by chat[dot]lmsys[dot]org. That approach also didn't work, because the text box wasn't big enough to hold them all.

To sum it all up: wake me up when Claude 3 Opus, ChatGPT 4o or any other LLM is able to read the contents of a web page and generate a list of occurrences based on the numbered lists included in that page. Tasks like these are, on paper, very appropriate for LLMs to handle: closed-ended, very straightforward jobs. Yet they are still completely unreliable. So, sure, of course we're experiencing a hype built upon overblown expectations.

Nevertheless, I would like to award an honorable mention to Claude 3 Opus, because it helped me create a Python script that managed to correctly scrape all 206 lists from that page and build a ranking of occurrences from them. It took me a while to explain to it the structure the BBC used on that page, but it finally got it. It even guessed that I had missed the li tag - it was inside double quotes in a previous prompt and, because of that, it was automatically removed. The screenshots included above show the winning prompt, the code excerpt suggested by Claude 3, and the first 38 top TV shows.
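The core of such a script can be sketched like this (a minimal stand-in using only Python's standard library; the embedded HTML snippet and tag structure are simplified assumptions for illustration, not the BBC page's real markup or the code Claude actually produced):

```python
from collections import Counter
from html.parser import HTMLParser

class ListItemParser(HTMLParser):
    """Collects the text content of every <li> element in the document."""
    def __init__(self):
        super().__init__()
        self._in_li = False
        self.items = []

    def handle_starttag(self, tag, attrs):
        if tag == "li":
            self._in_li = True
            self.items.append("")

    def handle_endtag(self, tag):
        if tag == "li":
            self._in_li = False

    def handle_data(self, data):
        if self._in_li:
            self.items[-1] += data

# Simplified stand-in for the downloaded page (the real markup differs).
html_doc = """
<ol><li>The Wire</li><li>Mad Men</li></ol>
<ol><li>Mad Men</li><li>Breaking Bad</li></ol>
<ol><li>The Wire</li><li>Mad Men</li></ol>
"""

parser = ListItemParser()
parser.feed(html_doc)

# Rank shows by how many lists mention them.
ranking = Counter(item.strip() for item in parser.items).most_common()
for rank, (show, count) in enumerate(ranking, start=1):
    print(f"{rank}. {show} ({count} lists)")
```

On a real page you would fetch the HTML first and likely need to scope the parser to the specific container elements holding the critics' lists, but the counting step stays the same.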

stefan, to tech
@stefan@stefanbohacek.online avatar

"The generative AI boom has eroded trust between creatives and Silicon Valley. [...] it’s time for tech companies to stop screwing around for their own benefit, listen to the users who pay them, and act in a transparent way."

https://www.fastcompany.com/91137832/creatives-are-right-to-be-fed-up-with-adobe-and-every-other-tech-company-right-now

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #Search #Perplexity #Plagiarism #Journalism #Media #News: "AI-powered search startup Perplexity appears to be plagiarizing journalists’ work through its newly launched feature, Perplexity Pages, which lets people curate content on a particular topic. Multiple posts that have been “curated” by the Perplexity team on its platform are strikingly similar to original stories from multiple publications, including Forbes, CNBC and Bloomberg. The posts, which have already gathered tens of thousands of views, do not mention the publications by name in the article text — the only attributions are small, easy-to-miss logos that link out to them.

For instance, a Perplexity aggregation of Forbes’ exclusive reporting on Eric Schmidt’s stealth drone project contains several fragments that appear to have been lifted, including a custom illustration. Over the past several months, Forbes has broken a series of stories on the former Google CEO’s secretive efforts to develop AI-guided aircraft for the battlefield, and this week reported that Schmidt had poached talent from SpaceX, Apple and Google, and has been testing his drones in the wealthy Silicon Valley town of Menlo Park." https://www.forbes.com/sites/sarahemerson/2024/06/07/buzzy-ai-search-engine-perplexity-is-directly-ripping-off-content-from-news-outlets/

jeffowski, to ai
@jeffowski@mastodon.world avatar
Nonilex, to ArtificialIntelligence
@Nonilex@masto.ai avatar

The federal #government is facing a dwindling window to #regulate the use of #ArtificialIntelligence in campaigns before the #2024election. The #FCC chair announced a plan last month to require #politicians to disclose #AI use in TV & radio #ads. But the proposal is facing opposition from a top official at the #FEC, which has been considering its own new rules on the use of AI in campaigns.

#law #tech #disinformation #InfluenceCampaign #generativeAI #regulation #Congress
https://www.washingtonpost.com/technology/2024/06/06/ai-election-2024-us-misinformation-regulation/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "When it comes to AI, the best defence is not to simply wrap ourselves in a protective legislative cocoon and demand another tough new law to preempt or repel every risk or act of harm.

Rather, it is about determining who has the power.

If we are going to embrace AI, let’s do so as active participants, not passive subjects. Let’s embed the notion of shared benefits with strong industrial guardrails. Let’s get AI out of the IT department and onto the shop floor. And let’s demand those driving the introduction of this technology do so with us, not to us; shaped by us, not shaping us; augmenting our labour, not automating it.

The lesson of the social media revolution has been that technology is neither innately good nor bad. What seemed like a positive tool to connect people on an open platform has become a threat to our collective wellbeing because of the underlying business model.

Approaching AI with this critical mindset, rather than naively embracing progress as a self-evident good, is the first step.

Thanks to scholars like Acemoglu and Johnson, we now have an economic argument to match the moral one: the adaptation of new technology can make us all richer and happier if we are given the chance to collectively design it and control it."

https://www.theguardian.com/australia-news/commentisfree/article/2024/jun/04/scarlett-johansson-wont-save-us-from-ai-but-if-workers-have-their-say-it-could-benefit-us-all

heimspielTV, to generativeAI German
@heimspielTV@augsburg.social avatar

CAI now also answers questions about cannabis on Twitch. He knows the law, knows the risks of handling THC, and shares information about hemp in general and about different strains. Did you know that CBD-rich cannabis without THC has no psychoactive effects, is relaxing, and can help you fall asleep?

https://twitch.tv/heimspieltv

BenjaminHan, to llm
@BenjaminHan@sigmoid.social avatar

1/

With LLM applications more abundant than ever, have researchers been using them to assist their writing? We know they have when writing peer reviews [1], but what about their published papers?

Liang et al. come back to answer this question in [3]. They applied the same corpus-based methodology proposed in [2] to 950k papers published between 2020 and 2024, and the answer is a resounding YES, especially in CS (up to 17.5%) (screenshot 1).

chetwisniewski, (edited ) to ai
@chetwisniewski@securitycafe.ca avatar

I don't think we give Meta, Google and OpenAI enough credit for their AI LLM accomplishments. I mean, who would have imagined we could spend billions of dollars and warm the planet a few degrees, all to teach computers to not be able to do math? It really is an astonishing achievement.

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "VCs are clamoring to invest in hot AI companies, willing to pay exorbitant share prices for coveted spots on their cap tables. Even so, most aren’t able to get into such deals at all. Yet, small, unknown investors, including family offices and high-net-worth individuals, have found their own way to get shares of the hottest private startups like Anthropic, Groq, OpenAI, Perplexity, and Elon Musk’s X.ai (the makers of Grok).

They are using special purpose vehicles, or SPVs, where multiple parties pool their money to share an allocation of a single company. SPVs are generally formed by investors who have direct access to the shares of these startups and then turn around and sell a part of their allocation to external backers, often charging significant fees while retaining some profit share (known as carry).

While SPVs aren’t new – smaller investors have relied on them for years – there’s a growing trend of SPVs successfully getting shares from the biggest names in AI.

These investors are finding that the most popular AI companies, except OpenAI, are not all that hard for them to buy at their smaller levels of investing."
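The fee-and-carry mechanics the article describes work out roughly like this (a toy calculation; the 2% fee, 20% carry, and the dollar amounts are illustrative assumptions, not figures from the article):

```python
def spv_backer_proceeds(invested, exit_multiple, mgmt_fee=0.02, carry=0.20):
    """Return what an external SPV backer keeps after fees and carry.

    mgmt_fee - upfront fee on committed capital (assumed 2%)
    carry    - organizer's share of the profits (assumed 20%)
    """
    deployed = invested * (1 - mgmt_fee)   # capital left after the upfront fee
    gross = deployed * exit_multiple       # value of the stake at exit
    profit = max(gross - deployed, 0)
    return gross - profit * carry          # organizer takes carry on profit only

# A backer puts $100k into an SPV whose shares triple in value:
# the backer keeps $254,800 rather than the full $300k a direct
# shareholder would see - the spread is the organizer's compensation.
print(round(spv_backer_proceeds(100_000, 3.0), 2))
```

Real SPV terms vary widely (some charge fees annually, some take carry above a hurdle), but this is the basic reason organizers with direct access are happy to resell their allocations.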

https://techcrunch.com/2024/06/01/vcs-are-selling-shares-of-hot-ai-companies-like-anthropic-and-xai-to-small-investors-in-a-wild-spv-market

AlexJimenez, to ai
@AlexJimenez@mas.to avatar

Inside Anthropic, the #AI Company Betting That Safety Can Be a Winning #Strategy

https://time.com/6980000/anthropic/

#DigitalTransformation #LLMs #GenerativeAI

adamsnotes, to generativeAI
@adamsnotes@me.dm avatar

AI clones of people are becoming a lot more common and a lot more worrying.

This one was taken down after Ali jumped through a bunch of hoops to prove her identity, but CivitAI currently only removes models if there is a complaint - they have no policy against creating models to impersonate real people.

--
What It’s Like Finding Your Nonconsensual AI Clone Online
https://www.404media.co/what-its-like-finding-your-nonconsensual-ai-clone-online/

#Deepfakes #GenerativeAI #CivitAI #404media

dalfen, to ai
@dalfen@mstdn.social avatar

Imagine— It might all be just a fad.


https://www.bbc.com/news/articles/c511x4g7x7jo

denis, to generativeAI
@denis@ruby.social avatar

Literally every single thing ChatGPT tells me is provably wrong.

Generative AI is a fucking train wreck.

crafty_crow, to ai
@crafty_crow@mastodon.sdf.org avatar

If AI tech bros are going to steal content for their generative AI, perhaps poisoning the well with mislabeled images, incorrect responses, and injecting instructions in content is well within our rights to fight back.

unevil_cat, to StableDiffusion German
@unevil_cat@mastodon.social avatar
aby, to tech
@aby@aus.social avatar

“Most people are not aware of the resource usage underlying ChatGPT,” Ren said. “If you’re not aware of the resource usage, then there’s no way that we can help conserve the resources.”

In July 2022, the month before OpenAI says it completed its training of GPT-4, Microsoft pumped in about 11.5 million gallons of water to its cluster of Iowa data centers, according to the West Des Moines Water Works. That amounted to about 6% of all the water used in the district, which also supplies drinking water to the city’s residents.

#tech #technology #AI #generativeAI #ChatGPT #microsoft #ClimateCrisis

https://apnews.com/article/chatgpt-gpt4-iowa-ai-water-consumption-microsoft-f551fde98083d17a7e8d904f8be822c4?fbclid=IwZXh0bgNhZW0CMTEAAR3RSpm6xHK11bscxSH0LOYa_u0NzVqm82Q6rYJ6wY9I3CEHNJjy3AGXkYs_aem_AZajQUCRmv2g52SCEwjpSTEV1O3wZE25xpNndxjJRG0H3JKJBG-abCQJA12X_owD3rmSmRXu4wOOfUmjLs5KJzKf

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Company documents obtained by Vox with signatures from Altman and Kwon complicate their claim that the clawback provisions were something they hadn’t known about. A separation letter on the termination documents, which you can read embedded below, says in plain language, “If you have any vested Units ... you are required to sign a release of claims agreement within 60 days in order to retain such Units.” It is signed by Kwon, along with OpenAI VP of people Diane Yoon (who departed OpenAI recently). The secret ultra-restrictive NDA, signed for only the “consideration” of already vested equity, is signed by COO Brad Lightcap.

Meanwhile, according to documents provided to Vox by ex-employees, the incorporation documents for the holding company that handles equity in OpenAI contains multiple passages with language that gives the company near-arbitrary authority to claw back equity from former employees or — just as importantly — block them from selling it.

Those incorporation documents were signed on April 10, 2023, by Sam Altman in his capacity as CEO of OpenAI."

https://www.vox.com/future-perfect/351132/openai-vested-equity-nda-sam-altman-documents-employees

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #OpenAI #Film #Movies #Her: "Now, I do see why Altman likes it so much; besides its treatment of AI as personified emotional pleasure dome, two other things happen that must appeal to the OpenAI CEO: 1. Human-AI relationships are socially normalized almost immediately (this is the most unrealistic thing in the movie, besides its vision of a near-future AI that has good public transit and walkable neighborhoods; in a matter of months everyone seems to find it normal that people are ‘dating’ voices in the earbuds they bought from Best Buy), and 2. the AIs meet a resurrected model of Alan Watts, band together, and quietly transcend, presumably achieving some version of what Altman imagines to be AGI. He professes to worrying that AI will destroy humanity, and has a survival bunker and guns to prove it, so this science fictional depiction of AGIification must be more soothing than the other one.

But the weirdest thing to me is that it’s only after the AIs are gone that the characters can be said to undergo any sort of personal growth; they spend some time looking at the sunset, feel a human connection, and Theo writes that long overdue handwritten apology letter to his ex. It’s hard to see how the AI wasn’t merely holding them back from all this, and why Altman would find this outcome inspiring in the context of running a company that is bent on inundating the world with AI. Maybe he just missed the subtext? It’s become something of a running joke that Altman is bad at understanding movies: he thought Oppenheimer should have been made in a way that inspired kids to become physicists, and that the Social Network was a great positive message for startup founders.

Finally, Altman’s admiration is also a bit puzzling in that the AIs don’t ever really do anything amazing for society, even while they’re here."

https://www.bloodinthemachine.com/p/why-is-sam-altman-so-obsessed-with

tomstoneham, to ai
@tomstoneham@dair-community.social avatar

"Yet again, LLMs show us that many of our tests for cognitive capacities are merely tracking proxies."

Some thoughts on genAI 'passing' theory of mind tests.

#AI #generativeAI

https://listed.to/@24601/51831/minds-and-theories-of-mind

lns, to generativeAI
@lns@fosstodon.org avatar

I wonder if generative AI will cause a real drop in motivation for organic human creativity.. "I'll just have AI make it for me."

#generativeAI #human #creativity #motivation #psychology

CenturyAvocado, to ai
@CenturyAvocado@fosstodon.org avatar

Here comes the bullshit machine... @revk @bloor
Someone came in this evening, leading to a confusing interaction until the cause was identified.

On a side note, I think I might be done with this internet and tech stuff. I wonder what manual work I can take up instead.

mheadd, to ai
@mheadd@mastodon.social avatar

This is a fundamental mistake that people make when trying to assess whether LLMs are an appropriate tool to use in optimizing a process, function, or service:

"LLMs are not search engines looking up facts; they are pattern-spotting engines that guess the next best option in a sequence."

This terrific article is a great explainer on how they work and their limitations.
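The quoted distinction can be made concrete with a toy model (a bigram counter over a tiny made-up corpus; real LLMs are neural networks over subword tokens, so this is only a sketch of the "guess the next item in a sequence" idea):

```python
from collections import Counter, defaultdict

corpus = "the capital of france is paris . the capital of italy is rome .".split()

# Count which word follows which: the crudest possible pattern-spotter.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict(word):
    """Return the most frequent continuation seen in the training data."""
    return follows[word].most_common(1)[0][0]

# The model has no notion of facts; it only replays observed patterns,
# so "is" is always followed by whichever continuation it saw most.
print(predict("of"))
print(predict("is"))
```

The model produces fluent-looking continuations without ever "looking anything up", which is exactly why treating such a system as a fact database goes wrong.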

https://ig.ft.com/generative-ai/

#AI #ChatGPT #GenerativeAI
