uniinnsbruck, to Futurology
@uniinnsbruck@social.uibk.ac.at avatar

Physicists have developed a new method to prepare quantum operations on a given quantum computer, using a generative machine learning model to find the appropriate sequence of quantum gates. The study, recently published in Nature Machine Intelligence, marks a significant step towards unlocking the full potential of quantum computing.

📣 https://www.uibk.ac.at/en/newsroom/2024/how-ai-helps-programming-a-quantum-computer/

@fwf @ERC_Research
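
The announcement describes the underlying task only at a high level: find a sequence of gates whose combined action implements a target quantum operation. As a rough illustration of that task (not of the generative-model method from the Nature Machine Intelligence paper), here is a minimal Python/NumPy sketch that exhaustively searches short single-qubit gate sequences for one matching a target unitary; the toy gate set and the fidelity measure are assumptions made for this example.

import numpy as np
from itertools import product

# Toy single-qubit gate set -- an assumption for illustration, not the paper's setting.
GATES = {
    "H": np.array([[1, 1], [1, -1]]) / np.sqrt(2),
    "T": np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]]),
    "X": np.array([[0, 1], [1, 0]]),
}

def compose(sequence):
    # Multiply the gate matrices for a sequence of gate names (first name applied first).
    u = np.eye(2, dtype=complex)
    for name in sequence:
        u = GATES[name] @ u
    return u

def fidelity(u, target):
    # Global-phase-insensitive overlap between two single-qubit unitaries (1.0 = exact match).
    return abs(np.trace(target.conj().T @ u)) / 2

def best_sequence(target, max_len=4):
    # Brute-force search over short gate sequences for the closest match to `target`.
    best, best_fid = (), 0.0
    for length in range(1, max_len + 1):
        for seq in product(GATES, repeat=length):
            f = fidelity(compose(seq), target)
            if f > best_fid:
                best, best_fid = seq, f
    return best, best_fid

# Example: which short sequence implements the phase gate S?
S = np.array([[1, 0], [0, 1j]])
seq, fid = best_sequence(S)
print(seq, round(float(fid), 4))  # expected: ('T', 'T') with fidelity ~1.0

The study's contribution, as summarised above, is to replace this kind of brute-force search with a generative model that proposes suitable gate sequences directly.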

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Suno, a generative AI music company, has raised $125 million in its latest funding round, according to a post on the company’s blog. The AI music firm, which is one of the rare start-ups that can generate voice, lyrics and instrumentals together, says it wants to usher in a “future where anyone can make music.”

Suno allows users to create full songs from simple text prompts. While most of its technology is proprietary, the company does lean on OpenAI’s ChatGPT for lyric and title generation. Free users can generate up to 10 songs per month, but with its Pro plan ($8 per month) and Premier plan ($24 per month), a user can generate up to 500 songs or 2,000 songs, respectively, on a monthly basis and is given “general commercial terms.”"

https://www.billboard.com/business/tech/ai-music-company-suno-raises-new-funding-round-1235688773/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Now, I do see why Altman likes it so much; besides its treatment of AI as personified emotional pleasure dome, two other things happen that must appeal to the OpenAI CEO: 1. Human-AI relationships are socially normalized almost immediately (this is the most unrealistic thing in the movie, besides its vision of a near-future AI that has good public transit and walkable neighborhoods; in a matter of months everyone seems to find it normal that people are ‘dating’ voices in the earbuds they bought from Best Buy), and 2. the AIs meet a resurrected model of Alan Watts, band together, and quietly transcend, presumably achieving some version of what Altman imagines to be AGI. He professes to worrying that AI will destroy humanity, and has a survival bunker and guns to prove it, so this science fictional depiction of AGIification must be more soothing than the other one.

But the weirdest thing to me is that it’s only after the AIs are gone that the characters can be said to undergo any sort of personal growth; they spend some time looking at the sunset, feel a human connection, and Theo writes that long overdue handwritten apology letter to his ex. It’s hard to see how the AI wasn’t merely holding them back from all this, and why Altman would find this outcome inspiring in the context of running a company that is bent on inundating the world with AI. Maybe he just missed the subtext? It’s become something of a running joke that Altman is bad at understanding movies: he thought Oppenheimer should have been made in a way that inspired kids to become physicists, and that The Social Network was a great positive message for startup founders.

Finally, Altman’s admiration is also a bit puzzling in that the AIs don’t ever really do anything amazing for society, even while they’re here."

https://www.bloodinthemachine.com/p/why-is-sam-altman-so-obsessed-with

remixtures,
@remixtures@tldr.nettime.org avatar

"This is the unvarnished logic of OpenAI. It is cold, rationalist, and paternalistic. That such a small group of people should be anointed to build a civilization-changing technology is inherently unfair, they note. And yet they will carry on because they have both a vision for the future and the means to try to bring it to fruition. Wu’s proposition, which he offers with a resigned shrug in the video, is telling: You can try to fight this, but you can’t stop it. Your best bet is to get on board.

You can see this dynamic playing out in OpenAI’s content-licensing agreements, which it has struck with platforms such as Reddit and news organizations such as Axel Springer and Dotdash Meredith. Recently, a tech executive I spoke with compared these types of agreements to a hostage situation, suggesting they believe that AI companies will find ways to scrape publishers’ websites anyhow, if they don’t comply. Best to get a paltry fee out of them while you can, the person argued.

The Johansson accusations only compound (and, if true, validate) these suspicions. Altman’s alleged reasoning for commissioning Johansson’s voice was that her familiar timbre might be “comforting to people” who find AI assistants off-putting. Her likeness would have been less about a particular voice-bot aesthetic and more of an adoption hack or a recruitment tool for a technology that many people didn’t ask for, and seem uneasy about. Here, again, is the logic of OpenAI at work. It follows that the company would plow ahead, consent be damned, simply because it might believe the stakes are too high to pivot or wait. When your technology aims to rewrite the rules of society, it stands to reason that society’s current rules need not apply."

https://www.theatlantic.com/technology/archive/2024/05/openai-scarlett-johansson-sky/678446

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Without some minimal agreement as to what those basic human capabilities are—what activities belong to the jurisdiction of our species, not to be usurped by machines—it becomes difficult to pin down why some uses of artificial intelligence delight and excite, while others leave many of us feeling queasy.

What makes many applications of artificial intelligence so disturbing is that they don’t expand our mind’s capacity to think, but outsource it. AI dating concierges would not enhance our ability to make romantic connections with other humans, but obviate it. In this case, technology diminishes us, and that diminishment may well become permanent if left unchecked.

Over the long term, human beings in a world suffused with AI-enablers will likely prove less capable of engaging in fundamental human activities: analyzing ideas and communicating them, forging spontaneous connections with others, and the like. While this may not be the terrifying, robot-warring future imagined by the Terminator movies, it would represent another kind of existential catastrophe for humanity."

https://www.theatlantic.com/ideas/archive/2024/05/ai-dating-algorithms-relationships/678422/

magnetichuman, to generativeAI
@magnetichuman@cupoftea.social avatar

Companies today are trying to put Generative AI into everything with the same enthusiasm that they put radium into consumer products in the 1930s.
#GenerativeAI #AIrisks

tomstoneham, to ai
@tomstoneham@dair-community.social avatar

"Yet again, LLMs show us that many of our tests for cognitive capacities are merely tracking proxies."

Some thoughts on genAI 'passing' theory of mind tests.

https://listed.to/@24601/51831/minds-and-theories-of-mind

lns, to generativeAI
@lns@fosstodon.org avatar

I wonder if generative AI will cause a real drop in motivation for organic human creativity... "I'll just have AI make it for me."

lns,
@lns@fosstodon.org avatar

@etherdiver Can you elaborate please? I can definitely see this in the creative professional field, for example.

etherdiver,
@etherdiver@ravenation.club avatar

@lns most creative people create things not as a product but because they have a drive to create: a lot of times the final product is almost incidental.

Doing creative stuff for your job is rarely an actual expression of your creativity, even if it requires some element of creativity.

Will creative people cheat and use AI for their jobs? Maybe, if it ever gets to the point where it doesn't suck. Work is work, after all.

Will they stop being creative for creativity's sake? Absolutely not.

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #Copyright #IP: "Generative artificial intelligence (AI) has the potential to augment and democratize creativity. However, it is undermining the knowledge ecosystem that now sustains it. Generative AI may unfairly compete with creatives, displacing them in the market. Most AI firms are not compensating creative workers for composing the songs, drawing the images, and writing both the fiction and non-fiction books that their models need in order to function. AI thus threatens not only to undermine the livelihoods of authors, artists, and other creatives, but also to destabilize the very knowledge ecosystem it relies on.

Alarmed by these developments, many copyright owners have objected to the use of their works by AI providers. To recognize and empower their demands to stop non-consensual use of their works, we propose a streamlined opt-out mechanism that would require AI providers to remove objectors’ works from their databases once copyright infringement has been documented. Those who do not object still deserve compensation for the use of their work by AI providers. We thus also propose a levy on AI providers, to be distributed to the copyright owners whose work they use without a license. This scheme is designed to ensure creatives receive a fair share of the economic bounty arising out of their contributions to AI. Together these mechanisms of consent and compensation would result in a new grand bargain between copyright owners and AI firms, designed to ensure both thrive in the long-term."

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4826695
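
The abstract states the two mechanisms purely as policy proposals. As a toy illustration of the bookkeeping they imply, here is a minimal Python sketch of an opt-out filter plus a pro-rata levy split; the data model, the proportional-to-usage rule, and all names are assumptions made for this example, not anything specified in the paper.

# Toy sketch of the proposed opt-out and levy mechanisms; all names and the
# proportional split rule are illustrative assumptions.

def apply_opt_outs(training_works, opted_out):
    # Drop works whose owners have documented an objection (the opt-out mechanism).
    return {work: owner for work, owner in training_works.items() if work not in opted_out}

def distribute_levy(levy_pool, usage_counts):
    # Split a levy pool among copyright owners in proportion to how often their works were used.
    total = sum(usage_counts.values())
    if total == 0:
        return {owner: 0.0 for owner in usage_counts}
    return {owner: levy_pool * count / total for owner, count in usage_counts.items()}

works = {"song_1": "author_a", "essay_2": "author_b", "novel_3": "author_c"}
print(apply_opt_outs(works, opted_out={"novel_3"}))
# {'song_1': 'author_a', 'essay_2': 'author_b'}

usage = {"author_a": 600, "author_b": 300, "author_c": 100}
print(distribute_levy(1_000_000, usage))
# {'author_a': 600000.0, 'author_b': 300000.0, 'author_c': 100000.0}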

remixtures, to UX Portuguese
@remixtures@tldr.nettime.org avatar

#UX #UserExperience #OpenAI #AI #GPT4o #GenerativeAI: "It is unethical to slap an interface, which convincingly simulates 100% confidence, onto a product which is anything less than 100% accurate, let alone a product that its CTO, Mira Murati, calls “pretty good”.

No exceptions; no “it will get better”. If the house doesn’t have a roof, don’t paint the walls.

This does not mean that reduction or removal of complexity is inherently deceitful, but it does mean that the complexity which informs a person not how, but why, something works the way it does can be an important factor in them deciding to use it.

Nothing could make this more evident than the crypto/web3 community’s obsession with “mass adoption” which they generally resolve to being a UX problem. They know that the complexity of crypto is intimidating to non-technical people (crimes and scams aside) so they relentlessly try to remove as much of the complexity as possible.

The unfortunate thing about removing complexity is that you never remove it, but rather, you move it to another place. The other place is always what crypto people like to call a “trusted third party”, the very thing that Bitcoin was created to eliminate."

https://fasterandworse.com/known-purpose-and-trusted-potential/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "For years now, OpenAI told everyone that these were all secondary concerns — that its deeper ambition was something nobler, and more public-spirited. But since Altman’s return, the company has been telling a different story: a story about winning at all costs.

And why bother with superalignment, when there’s winning to do?

Why bother getting actresses’ permission, when the right numbers are all still going up?"

https://www.platformer.news/open-ai-scarlett-johansson-her-voice-sam-altman/

CenturyAvocado, to ai
@CenturyAvocado@fosstodon.org avatar

Here comes the bullshit machine... @revk @bloor
Someone came into this evening leading to a confusing interaction until the cause was identified.

On a side note, I think I might be done with this internet and tech stuff. I wonder what manual work I can take up instead.

CenturyAvocado,
@CenturyAvocado@fosstodon.org avatar

@revk @bloor Haha... I didn't even notice the "Content" bullet point below: "The channel contains a mix of users, including A&A staff, customers, and other individuals. The discussion topics can range from general legal questions to specific cases and laws related to the UK."

mheadd, to ai
@mheadd@mastodon.social avatar

This gets at a fundamental mistake that people make when trying to assess whether LLMs are an appropriate tool for optimizing a process, function, or service:

"LLMs are not search engines looking up facts; they are pattern-spotting engines that guess the next best option in a sequence."

This article is a terrific explainer of how they work and what their limitations are.

https://ig.ft.com/generative-ai/
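
The quoted line is the key technical point: an LLM predicts a plausible next token from patterns in its training data rather than looking up stored facts. The minimal character-level sketch below makes that operation concrete; it is a toy bigram counter, not a real language model, and the training string is invented for the example.

from collections import Counter, defaultdict

# Toy "pattern-spotting" model: count which character tends to follow which,
# then always guess the most frequent continuation. Nothing is looked up as a fact.
corpus = "the cat sat on the mat. the cat ate the rat."

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def guess_next(text):
    # Return the statistically most likely next character given the last one seen.
    candidates = follows.get(text[-1])
    if not candidates:
        return " "  # fall back when the pattern never appeared in the training text
    return candidates.most_common(1)[0][0]

# Generate a short continuation by repeatedly guessing the "next best option".
text = "the c"
for _ in range(20):
    text += guess_next(text)
print(text)

Even this toy version shows why pattern-spotting is not fact lookup: the continuation is simply whatever is statistically most familiar, whether or not it happens to be true.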

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "More broadly, across news media coverage of AI in general, reviewing 30 published studies, Saba Rebecca Brause and her coauthors find that, while there are of course exceptions, most research so far find not just a strong increase in the volume of reporting on AI, but also “largely positive evaluations and economic framing” of these technologies.

So, perhaps, as Timnit Gebru, founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR), has written on X: “The same news orgs hype stuff up during ‘AI summers’ without even looking into their archives to see what they wrote decades ago?”

There are some really good reporters doing important work to help people understand AI—as well as plenty of sensationalist coverage focused on killer robots and wild claims about possible future existential risks.

But, more than anything, research on how news media cover AI overall suggests that Gebru is largely right – the coverage tends to be led by industry sources, and often takes claims about what the technology can and can’t do, and might be able to do in the future, at face value in ways that contribute to the hype cycle.

https://reutersinstitute.politics.ox.ac.uk/news/how-news-coverage-often-uncritical-helps-build-ai-hype

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "This contradiction is at the heart of what makes OpenAI profoundly frustrating for those of us who care deeply about ensuring that AI really does go well and benefits humanity. Is OpenAI a buzzy, if midsize tech company that makes a chatty personal assistant, or a trillion-dollar effort to create an AI god?

The company’s leadership says they want to transform the world, that they want to be accountable when they do so, and that they welcome the world’s input into how to do it justly and wisely.

But when there’s real money at stake — and there are astounding sums of real money at stake in the race to dominate AI — it becomes clear that they probably never intended for the world to get all that much input. Their process ensures former employees — those who know the most about what’s happening inside OpenAI — can’t tell the rest of the world what’s going on.

The website may have high-minded ideals, but their termination agreements are full of hard-nosed legalese. It’s hard to exercise accountability over a company whose former employees are restricted to saying “I resigned.”"

https://www.vox.com/future-perfect/2024/5/17/24158478/openai-departures-sam-altman-employees-chatgpt-release
