remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "When it comes to AI, the best defence is not to simply wrap ourselves in a protective legislative cocoon and demand another tough new law to preempt or repel every risk or act of harm.

Rather, it is about determining who has the power.

If we are going to embrace AI, let’s do so as active participants, not passive subjects. Let’s embed the notion of shared benefits with strong industrial guardrails. Let’s get AI out of the IT department and onto the shop floor. And let’s demand those driving the introduction of this technology do so with us, not to us; shaped by us, not shaping us; augmenting our labour, not automating it.

The lesson of the social media revolution has been that technology is neither innately good nor bad. What seemed like a positive tool to connect people on an open platform has become a threat to our collective wellbeing because of the underlying business model.

Approaching AI with this critical mindset, rather than naively embracing progress as a self-evident good, is the first step.

Thanks to scholars like Acemoglu and Johnson, we now have an economic argument to match the moral one: the adaptation of new technology can make us all richer and happier if we are given the chance to collectively design it and control it."

https://www.theguardian.com/australia-news/commentisfree/article/2024/jun/04/scarlett-johansson-wont-save-us-from-ai-but-if-workers-have-their-say-it-could-benefit-us-all

heimspielTV, to generativeAI German
@heimspielTV@augsburg.social avatar

CAI is now also educating people about cannabis on Twitch. He knows the law, knows the risks of handling THC, and provides information about hemp in general and about different strains. Did you know that CBD-containing cannabis without THC has no psychoactive effects, is relaxing, and can help you fall asleep?

https://twitch.tv/heimspieltv

BenjaminHan, to llm
@BenjaminHan@sigmoid.social avatar

1/

With LLM applications now abundant, have researchers been using them to assist their writing? We know they have when writing peer reviews [1], but what about in their published papers?

Liang et al. return to answer this question in [3]. They applied the same corpus-based methodology proposed in [2] to 950k papers published between 2020 and 2024, and the answer is a resounding YES, especially in CS (up to 17.5%) (screenshot 1).

chetwisniewski, to ai
@chetwisniewski@securitycafe.ca avatar

I don't think we give Meta, Google and OpenAI enough credit for their AI LLM accomplishments. I mean, who would have imagined we could spend billions of dollars and warm the planet a few degrees, all to teach computers to not be able to do math. It really is an astonishing achievement. #AI #generativeAI

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #VCs #CashingOut #AIHype #SPVs: "VCs are clamoring to invest in hot AI companies, willing to pay exorbitant share prices for coveted spots on their cap tables. Even so, most aren’t able to get into such deals at all. Yet, small, unknown investors, including family offices and high-net-worth individuals, have found their own way to get shares of the hottest private startups like Anthropic, Groq, OpenAI, Perplexity, and Elon Musk’s X.ai (the makers of Grok).

They are using special purpose vehicles, or SPVs, where multiple parties pool their money to share an allocation of a single company. SPVs are generally formed by investors who have direct access to the shares of these startups and then turn around and sell a part of their allocation to external backers, often charging significant fees while retaining some profit share (known as carry).

While SPVs aren’t new – smaller investors have relied on them for years – there’s a growing trend of SPVs successfully getting shares from the biggest names in AI.

These investors are finding that the most popular AI companies, except OpenAI, are not all that hard for them to buy at their smaller levels of investing."

https://techcrunch.com/2024/06/01/vcs-are-selling-shares-of-hot-ai-companies-like-anthropic-and-xai-to-small-investors-in-a-wild-spv-market
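The fee-plus-carry mechanics described in the excerpt can be sketched with a few lines of arithmetic. All figures below (commitment size, 2% fee, 20% carry, 3x exit) are hypothetical illustrations, not numbers from the article:

```python
# Hypothetical SPV economics sketch: an organizer with access to a hot
# startup's shares sells part of their allocation to a smaller backer,
# charging an upfront fee plus "carry" (a share of profits) on exit.
commitment = 100_000        # backer's total commitment to the SPV
upfront_fee_rate = 0.02     # assumed 2% upfront fee to the organizer
carry_rate = 0.20           # assumed 20% of profits kept as carry

invested = commitment * (1 - upfront_fee_rate)   # amount that buys shares
exit_value = invested * 3                        # suppose the shares 3x
profit = exit_value - invested
backer_proceeds = invested + profit * (1 - carry_rate)

print(round(invested))         # 98000
print(round(backer_proceeds))  # 254800
```

The organizer earns money on both ends of the hypothetical deal: the fee is charged whether or not the startup succeeds, and the carry comes out of any upside.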

AlexJimenez, to ai
@AlexJimenez@mas.to avatar

Inside Anthropic, the Company Betting That Safety Can Be a Winning Strategy

https://time.com/6980000/anthropic/

adamsnotes, to generativeAI
@adamsnotes@me.dm avatar

AI clones of people are becoming a lot more common and a lot more worrying.

This one was taken down after Ali jumped through a bunch of hoops to prove her identity, but CivitAI currently only removes models if there is a complaint - they have no policy against creating models to impersonate real people.

--
What It’s Like Finding Your Nonconsensual AI Clone Online
https://www.404media.co/what-its-like-finding-your-nonconsensual-ai-clone-online/

#Deepfakes #GenerativeAI #CivitAI #404media

dalfen, to ai
@dalfen@mstdn.social avatar

Imagine— It might all be just a fad.


https://www.bbc.com/news/articles/c511x4g7x7jo

denis, to generativeAI
@denis@ruby.social avatar

Literally every single thing ChatGPT tells me is provably wrong.

Generative AI is a fucking train wreck.

#generativeAI #GenAI #ChatGPT

crafty_crow, to ai
@crafty_crow@mastodon.sdf.org avatar

If AI tech bros are going to steal content for their generative AI, then poisoning the well with mislabeled images, incorrect responses, and injected instructions is well within our rights as a way to fight back.

aby, to tech
@aby@aus.social avatar

“Most people are not aware of the resource usage underlying ChatGPT,” Ren said. “If you’re not aware of the resource usage, then there’s no way that we can help conserve the resources.”

In July 2022, the month before OpenAI says it completed its training of GPT-4, Microsoft pumped in about 11.5 million gallons of water to its cluster of Iowa data centers, according to the West Des Moines Water Works. That amounted to about 6% of all the water used in the district, which also supplies drinking water to the city’s residents.

#tech #technology #AI #generativeAI #ChatGPT #microsoft #ClimateCrisis

https://apnews.com/article/chatgpt-gpt4-iowa-ai-water-consumption-microsoft-f551fde98083d17a7e8d904f8be822c4?fbclid=IwZXh0bgNhZW0CMTEAAR3RSpm6xHK11bscxSH0LOYa_u0NzVqm82Q6rYJ6wY9I3CEHNJjy3AGXkYs_aem_AZajQUCRmv2g52SCEwjpSTEV1O3wZE25xpNndxjJRG0H3JKJBG-abCQJA12X_owD3rmSmRXu4wOOfUmjLs5KJzKf

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #OpenAI #BigTech #SiliconValley: "Company documents obtained by Vox with signatures from Altman and Kwon complicate their claim that the clawback provisions were something they hadn’t known about. A separation letter on the termination documents, which you can read embedded below, says in plain language, “If you have any vested Units ... you are required to sign a release of claims agreement within 60 days in order to retain such Units.” It is signed by Kwon, along with OpenAI VP of people Diane Yoon (who departed OpenAI recently). The secret ultra-restrictive NDA, signed for only the “consideration” of already vested equity, is signed by COO Brad Lightcap.

Meanwhile, according to documents provided to Vox by ex-employees, the incorporation documents for the holding company that handles equity in OpenAI contains multiple passages with language that gives the company near-arbitrary authority to claw back equity from former employees or — just as importantly — block them from selling it.

Those incorporation documents were signed on April 10, 2023, by Sam Altman in his capacity as CEO of OpenAI."

https://www.vox.com/future-perfect/351132/openai-vested-equity-nda-sam-altman-documents-employees

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #OpenAI #Film #Movies #Her: "Now, I do see why Altman likes it so much; besides its treatment of AI as personified emotional pleasure dome, two other things happen that must appeal to the OpenAI CEO: 1. Human-AI relationships are socially normalized almost immediately (this is the most unrealistic thing in the movie, besides its vision of a near-future AI that has good public transit and walkable neighborhoods; in a matter of months everyone seems to find it normal that people are ‘dating’ voices in the earbuds they bought from Best Buy), and 2. the AIs meet a resurrected model of Alan Watts, band together, and quietly transcend, presumably achieving some version of what Altman imagines to be AGI. He professes to worrying that AI will destroy humanity, and has a survival bunker and guns to prove it, so this science fictional depiction of AGIification must be more soothing than the other one.

But the weirdest thing to me is that it’s only after the AIs are gone that the characters can be said to undergo any sort of personal growth; they spend some time looking at the sunset, feel a human connection, and Theo writes that long overdue handwritten apology letter to his ex. It’s hard to see how the AI wasn’t merely holding them back from all this, and why Altman would find this outcome inspiring in the context of running a company that is bent on inundating the world with AI. Maybe he just missed the subtext? It’s become something of a running joke that Altman is bad at understanding movies: he thought Oppenheimer should have been made in a way that inspired kids to become physicists, and that the Social Network was a great positive message for startup founders.

Finally, Altman’s admiration is also a bit puzzling in that the AIs don’t ever really do anything amazing for society, even while they’re here."

https://www.bloodinthemachine.com/p/why-is-sam-altman-so-obsessed-with

tomstoneham, to ai
@tomstoneham@dair-community.social avatar

"Yet again, LLMs show us that many of our tests for cognitive capacities are merely tracking proxies."

Some thoughts on genAI 'passing' theory of mind tests.

#AI #generativeAI

https://listed.to/@24601/51831/minds-and-theories-of-mind

lns, to generativeAI
@lns@fosstodon.org avatar

I wonder if generative AI will cause a real drop in motivation for organic human creativity: "I'll just have AI make it for me."

CenturyAvocado, to ai
@CenturyAvocado@fosstodon.org avatar

Here comes the bullshit machine... @revk @bloor
Someone came in this evening, leading to a confusing interaction until the cause was identified.

On a side note, I think I might be done with this internet and tech stuff. I wonder what manual work I can take up instead.

mheadd, to ai
@mheadd@mastodon.social avatar

This is a fundamental mistake that people make when trying to assess whether LLMs are an appropriate tool to use in optimizing a process, function, or service:

"LLMs are not search engines looking up facts; they are pattern-spotting engines that guess the next best option in a sequence."

This terrific article is a great explainer on how they work and their limitations.

https://ig.ft.com/generative-ai/

#AI #ChatGPT #GenerativeAI

attacus, to ai
@attacus@aus.social avatar

Most products and business problems require deterministic solutions.
Generative models are not deterministic.
Every single company who has ended up in the news for an AI gaffe has failed to grasp this distinction.

There’s no hammering “hallucination” out of a generative model; it’s baked into how the models work. You just end up spending so much time papering over the cracks in the façade that you end up with a beautiful découpage.
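The non-determinism point can be illustrated with a small sketch. The distribution and temperature value below are invented for illustration; the point is only that generative models sample from a probability distribution, so repeated runs of the same prompt can legitimately disagree:

```python
import random

# Hypothetical next-token distribution for a prompt (illustrative only).
probs = {"Paris": 0.90, "London": 0.06, "Berlin": 0.04}

def sample(probs, temperature=1.0):
    # Temperature reshapes the distribution (higher = flatter), but even
    # at temperature 1.0 the draw itself is random.
    weights = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    total = sum(weights.values())
    tokens = list(weights)
    return random.choices(tokens, [weights[t] / total for t in tokens])[0]

# Repeated sampling of the "same prompt" yields more than one answer.
outputs = {sample(probs, temperature=1.5) for _ in range(1000)}
print(len(outputs) > 1)  # True with overwhelming probability
```

A deterministic business process wrapped around a sampler like this inherits the sampler's variance, which is the gap the post describes.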

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #OpenAI #AISafety #AIEthics: "For months, OpenAI has been losing employees who care deeply about making sure AI is safe. Now, the company is positively hemorrhaging them.

Ilya Sutskever and Jan Leike announced their departures from OpenAI, the maker of ChatGPT, on Tuesday. They were the leaders of the company’s superalignment team — the team tasked with ensuring that AI stays aligned with the goals of its makers, rather than acting unpredictably and harming humanity.

They’re not the only ones who’ve left. Since last November — when OpenAI’s board tried to fire CEO Sam Altman only to see him quickly claw his way back to power — at least five more of the company’s most safety-conscious employees have either quit or been pushed out."

https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "The reality is that no matter how much OpenAI, Google, and the rest of the heavy hitters in Silicon Valley might want to continue the illusion that generative AI represents a transformative moment in the history of digital technology, the truth is that their fantasy is getting increasingly difficult to maintain. The valuations of AI companies are coming down from their highs and major cloud providers are tamping down the expectations of their clients for what AI tools will actually deliver. That’s in part because the chatbots are still making a ton of mistakes in the answers they give to users, including during Google’s I/O keynote. Companies also still haven’t figured out how they’re going to make money off all this expensive tech, even as the resource demands are escalating so much their climate commitments are getting thrown out the window."

https://disconnect.blog/ai-hype-is-over-ai-exhaustion-is-setting-in/
