br00t4c, to generativeAI
@br00t4c@mastodon.social avatar

STAT+: Venture capitalist Bob Kocher on generative AI startups, Change Healthcare cyberattack

https://www.statnews.com/2024/05/17/bob-kocher-venture-capitalist-health-tech-stat-summit/?utm_campaign=rss

FatherEnoch, to ai
@FatherEnoch@mastodon.online avatar

AI might be cool, but it’s also a big fat liar, and we should probably be talking about that more.

https://www.theverge.com/2024/5/15/24154808/ai-chatgpt-google-gemini-microsoft-copilot-hallucination-wrong

#AI
#generativeAI
#generative_AI
#generative_art
#TheVerge

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "I want to be very clear: I am a cis woman and do not have a beard. But if I type “show me a picture of Alex Cranz” into the prompt window, Meta AI inevitably returns images of very pretty dark-haired men with beards. I am only some of those things!

Meta AI isn’t the only one to struggle with the minutiae of The Verge’s masthead. ChatGPT told me yesterday I don’t work at The Verge. Google’s Gemini didn’t know who I was (fair), but after telling me Nilay Patel was a founder of The Verge, it then apologized and corrected itself, saying he was not. (I assure you he was.)

When you ask these bots about things that actually matter they mess up, too. Meta’s 2022 launch of Galactica was so bad the company took the AI down after three days. Earlier this year, ChatGPT had a spell and started spouting absolute nonsense, but it also regularly makes up case law, leading to multiple lawyers getting into hot water with the courts.

The AI keeps screwing up because these computers are stupid. Extraordinary in their abilities and astonishing in their dimwittedness. I cannot get excited about the next turn in the AI revolution because that turn is into a place where computers cannot consistently maintain accuracy about even minor things."

https://www.theverge.com/2024/5/15/24154808/ai-chatgpt-google-gemini-microsoft-copilot-hallucination-wrong

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "With Senate Majority Leader Chuck Schumer releasing a sweeping “roadmap” for AI legislation today and major product announcements from OpenAI and Google, it’s been a big week for AI… and it’s only Wednesday.

But amid the ever-quickening pace of action, some observers wonder if government is looking at the tech industry with the right perspective. A report shared first with DFD from the nonprofit Data & Society argues that in order for powerful AI to integrate successfully with humanity, it must actually feature… the humanities.

Data & Society’s Serena Oduro and Tamara Kneese write that social scientists and other researchers should be directly involved in federally funded efforts to regulate and analyze AI. They say that given the unpredictable impact it might have on how people live, work and interact with institutions, AI development should involve non-STEM experts at every step.

“Especially with a general purpose technology, it is very hard to anticipate what exactly this technology will be used for,” said Kneese, a Data & Society senior researcher."

https://www.politico.com/newsletters/digital-future-daily/2024/05/15/ai-data-society-report-humanities-00158195

remixtures, to Sony Portuguese
@remixtures@tldr.nettime.org avatar

Sony Music is the prototype of a company that uses artists as mere puppets to get the only thing it really wants: free money extracted through IP rents. It's a parasite that contributes nothing to the promotion of arts and science.

: "Sony Music is sending warning letters to more than 700 artificial intelligence developers and music streaming services globally in the latest salvo in the music industry’s battle against tech groups ripping off artists.

The Sony Music letter, which has been seen by the Financial Times, expressly prohibits AI developers from using its music — which includes artists such as Harry Styles, Adele and Beyoncé — and opts out of any text and data mining of any of its content for any purposes such as training, developing or commercialising any AI system.

Sony Music is sending the letter to companies developing AI systems including OpenAI, Microsoft, Google, Suno and Udio, according to those close to the group.

The world’s second-largest music group is also sending separate letters to streaming platforms, including Spotify and Apple, asking them to adopt “best practice” measures to protect artists and songwriters and their music from scraping, mining and training by AI developers without consent or compensation. It has asked them to update their terms of service, making it clear that mining and training on its content is not permitted.

Sony Music declined to comment further."

https://www.ft.com/content/c5b93b23-9f26-4e6b-9780-a5d3e5e7a409

drahardja, to ai
@drahardja@sfba.social avatar

On the other hand, successful lawsuits against companies for the output of lousy chatbots will put a dollar amount on the liability of using chatbots to talk to customers, and may actually reduce their usage. https://mastodon.social/@arstechnica/112452961167345476

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "The reality is that A.I. models can often prepare a decent first draft. But I find that when I use A.I., I have to spend almost as much time correcting and revising its output as it would have taken me to do the work myself.

And consider for a moment the possibility that perhaps A.I. isn’t going to get that much better anytime soon. After all, the A.I. companies are running out of new data on which to train their models, and they are running out of energy to fuel their power-hungry A.I. machines. Meanwhile, authors and news organizations (including The New York Times) are contesting the legality of having their data ingested into the A.I. models without their consent, which could end up forcing quality data to be withdrawn from the models.

Given these constraints, it seems just as likely to me that generative A.I. could end up like the Roomba, the mediocre vacuum robot that does a passable job when you are home alone but not if you are expecting guests."

https://www.nytimes.com/2024/05/15/opinion/artificial-intelligence-ai-openai-chatgpt-overrated-hype.html

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Which is the most accurate AI system for generating code? Surprisingly, there isn’t currently a good way to answer questions like these.

Based on HumanEval, a widely used benchmark for code generation, the most accurate publicly available system is LDB (short for LLM debugger). But there’s a catch. The most accurate generative AI systems, including LDB, tend to be agents, which repeatedly invoke language models like GPT-4. That means they can be orders of magnitude more costly to run than the models themselves (which are already pretty costly). If we eke out a 2% accuracy improvement for 100x the cost, is that really better?

In this post, we argue that:

  • AI agent accuracy measurements that don’t control for cost aren’t useful.

  • Pareto curves can help visualize the accuracy-cost tradeoff.

  • Current state-of-the-art agent architectures are complex and costly but no more accurate than extremely simple baseline agents that cost 50x less in some cases.

  • Proxies for cost such as parameter count are misleading if the goal is to identify the best system for a given task. We should directly measure dollar costs instead.

  • Published agent evaluations are difficult to reproduce because of a lack of standardization and questionable, undocumented evaluation methods in some cases."

https://www.aisnakeoil.com/p/ai-leaderboards-are-no-longer-useful
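
The cost-accuracy point the authors make is easy to turn into a concrete check. Below is a minimal sketch (mine, not from the post; the agent names, costs, and accuracies are made up) that keeps only the agents on the Pareto frontier of dollar cost versus benchmark accuracy, i.e. the ones not beaten by something that is both cheaper and at least as accurate.

```python
# Minimal sketch of the cost-vs-accuracy Pareto frontier idea from the post.
# Agent names, costs, and accuracies below are hypothetical illustrations.

def pareto_frontier(agents):
    """Return agents not dominated by a cheaper-or-equal, at-least-as-accurate agent."""
    frontier = []
    for name, cost, acc in agents:
        dominated = any(
            other_cost <= cost and other_acc >= acc and (other_cost, other_acc) != (cost, acc)
            for _, other_cost, other_acc in agents
        )
        if not dominated:
            frontier.append((name, cost, acc))
    return sorted(frontier, key=lambda a: a[1])  # cheapest first

# (name, dollars per 100 benchmark problems, accuracy) -- made-up numbers
agents = [
    ("simple-baseline", 2.0, 0.85),
    ("retry-5x", 9.0, 0.88),
    ("complex-agent", 200.0, 0.87),  # ~100x the cost, no accuracy gain
]

for name, cost, acc in pareto_frontier(agents):
    print(f"{name}: ${cost:.2f} for {acc:.0%} accuracy")
```

On these made-up numbers the expensive agent is dominated by the cheap retry baseline, which is exactly the post's complaint: an accuracy-only leaderboard would still rank it above the simplest baseline.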

drahardja, to generativeAI
@drahardja@sfba.social avatar

Much as I dislike the theft of human labor that feeds many of the #generativeAI products we see today, I have to agree with @pluralistic that #copyright law is the wrong way to address the problem.

To frame the issue concretely: think of whom copyright law has benefited in the past, and then explain how it would benefit the individual creator when it is applied to #AI. (Hint: it won’t.)

Copyright law is already abused and extended to an absurd degree today. It already overreaches. It impoverishes society by putting up barriers to creation and allowing toll-collectors to exist between citizen artists and their audience.

Labor law is likely what we need to lean on. #unions and #guilds protect creators in a way that copyright cannot. The inequality and unequal bargaining power that lead to the exploitation of artists and workers are what we need to address head-on.

Copyright will not save us.

“AI "art" and uncanniness”

https://pluralistic.net/2024/05/13/spooky-action-at-a-close-up/#invisible-hand

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #Energy #DataCenters: "The rapid growth of the technology industry and the increasing reliance on cloud computing and artificial intelligence have led to a boom in the construction of data centers across the United States. Electric vehicles, wind and solar energy, and the smart grid are particularly reliant on data centers to optimize energy utilization. These facilities house thousands of servers that require constant cooling to prevent overheating and ensure optimal performance.

Unfortunately, many data centers rely on water-intensive cooling systems that consume millions of gallons of potable (“drinking”) water annually. A single data center can consume up to 3 million to 5 million gallons of drinking water per day, enough to supply thousands of households or farms.

The increasing use and training of AI models has further exacerbated the water consumption challenges faced by data centers."

https://archive.ph/7bunV
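
For scale, a quick back-of-envelope check of the "thousands of households" claim (my own arithmetic, assuming roughly 300 gallons of daily use per US household, a commonly cited EPA figure):

```python
# Back-of-envelope check of the "thousands of households" comparison.
# Assumes ~300 gallons/day per US household (EPA WaterSense estimate).
GALLONS_PER_HOUSEHOLD_PER_DAY = 300

for data_center_gallons in (3_000_000, 5_000_000):
    households = data_center_gallons / GALLONS_PER_HOUSEHOLD_PER_DAY
    print(f"{data_center_gallons:,} gal/day ≈ {households:,.0f} households")
# 3,000,000 gal/day ≈ 10,000 households
# 5,000,000 gal/day ≈ 16,667 households
```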

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #OpenAI #GPT4o: "The smooth interactivity that OpenAI has laboured hard to enable does well to paper over the cracks of the underlying technology. When ChatGPT first elbowed its way noisily into our lives in November 2022, those who had been following the technology for decades pointed out that AI in its current form was little more than snazzy pattern-matching technology – but they were drowned out by the excited masses. The next step towards human-like interaction is only going to amplify the din.

That’s great news for OpenAI, a company already valued at more than $80bn, and with investment from the likes of Microsoft. Its CEO, Sam Altman, tweeted last week that GPT-4o “feels like magic to me”. It’s also good news for others in the AI space, who are capitalising on the ubiquity of the technology and layering it into every aspect of our lives. Microsoft Word and PowerPoint now come with generative AI tools folded into them. Meta, the parent company of Facebook and Instagram, is putting its AI chatbot assistant into its apps in many countries, much to some users’ chagrin.

But it’s less good for ordinary users. Less friction between asking an AI system to do something and it actually completing the task is good for ease of use, but it also helps us forget that we’re not interacting with sentient beings. We need to remember that, because AI is not infallible; it comes with biases and environmental issues, and reflects the interests of its makers. These pressing issues are explored in my book, and the experts I spoke to tell me they represent significant concerns for the future."

https://www.theguardian.com/commentisfree/article/2024/may/14/chat-gtp-40-ai-human-corporate-product

bespacific, to generativeAI
@bespacific@newsie.social avatar

Fake studies have flooded publishers of top journals, leading to thousands of retractions and millions of dollars in lost revenue. The biggest hit has come to 217-year-old Wiley, based in Hoboken, NJ, which announced it is closing 19 journals, some of which were infected by large-scale research fraud. Wiley has reportedly had to retract more than 11,300 papers recently “that appeared compromised” as AI makes it easier for paper mills to peddle fake research. https://www.wsj.com/science/academic-studies-research-paper-mills-journals-publishing-f5a3d4bc

Andbaker, to academia
@Andbaker@aus.social avatar

Uncited generative AI use by students in coursework

I teach a course where I allow the use of generative AI. My university's rules allow this, and the students are instructed that they must cite their use of generative AI. I have set the same Laboratory Report coursework for the last two years, and students submit their work through TurnItIn, so I can see what the TurnItIn AI checker is reporting.

http://andy-baker.org/2024/05/15/uncited-generative-ai-use-by-students-in-coursework/
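
A sketch of the kind of tally this setup allows, purely as an illustration: the CSV file, column names ("ai_score", "cited_ai"), and the 20% threshold are my assumptions, not the author's method or TurnItIn's actual output format.

```python
import csv

# Hypothetical illustration: cross-tabulate a TurnItIn-style AI-writing score
# against whether the student cited generative AI, to estimate uncited use.
AI_SCORE_THRESHOLD = 20  # percent of text flagged as AI-generated (arbitrary cutoff)

flagged_uncited = flagged_cited = total = 0
with open("lab_report_submissions.csv", newline="") as f:  # assumed export file
    for row in csv.DictReader(f):
        total += 1
        if float(row["ai_score"]) >= AI_SCORE_THRESHOLD:
            if row["cited_ai"].strip().lower() == "yes":
                flagged_cited += 1
            else:
                flagged_uncited += 1

print(f"{flagged_uncited}/{total} submissions flagged by the AI checker without a citation")
print(f"{flagged_cited}/{total} submissions flagged but properly cited")
```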

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #AITraining #Copyright #GenerativeAI #IP #Creativity #Art: "Creating an individual bargainable copyright over training will not improve the material conditions of artists' lives – all it will do is change the relative shares of the value we create, shifting some of that value from tech companies that hate us and want us to starve to entertainment companies that hate us and want us to starve.

As an artist, I'm foursquare against anything that stands in the way of making art. As an artistic worker, I'm entirely committed to things that help workers get a fair share of the money their work creates, feed their families and pay their rent.

I think today's AI art is bad, and I think tomorrow's AI art will probably be bad, but even if you disagree (with either proposition), I hope you'll agree that we should be focused on making sure art is legal to make and that artists get paid for it.

Just because copyright won't fix the creative labor market, it doesn't follow that nothing will. If we're worried about labor issues, we can look to labor law to improve our conditions."

https://pluralistic.net/2024/05/13/spooky-action-at-a-close-up/#invisible-hand

bornach, to OpenAI
@bornach@masto.ai avatar

The best part of this [AI Explained] video on OpenAI's GPT-4o is at 5:45:
"Even though it failed all my maths prompts it is still a big improvement..."
https://youtu.be/ZJbu3NEPJN0?t=5m45s
That, in a nutshell, sums up the state of AI news coverage on social media.

Jigsaw_You, to OpenAI
@Jigsaw_You@mastodon.nl avatar

@garymarcus spot-on…

“OpenAI has presumably pivoted to new features precisely because they don’t know how to produce the kind of capability advance that the ‘exponential improvement’ would have predicted.”

https://garymarcus.substack.com/p/hot-take-on-openais-new-gpt-4o?r=8tdk6&utm_campaign=post&utm_medium=web&triedRedirect=true

opentermsarchive, to generativeAI
@opentermsarchive@mastodon.lescommuns.org avatar

What can we discover by reading the terms and conditions of generative AI tools? What do users consent to? What are the regulatory responses in 🇪🇺 🇨🇳 🇺🇸?
Join our online event on May 23 at 16:30 UTC+2 to discover the Generative AI Watch project!
https://www.sciencespo.fr/ecole-droit/en/events/generative-ai-watch/
We will present a dataset of terms and conditions of major generative AI services, some of the discoveries that we made when tracking their changes, and how the changing regulatory landscape could impact those terms.

remixtures, to apple Portuguese
@remixtures@tldr.nettime.org avatar

: "Smartphones and tablets were invented to enhance our lived experience, to make it easier to leave the house and go to the beach and meet up with friends — just a good camera-computer combo that fits in your pocket.

Theoretically, our phones and tablets will become even more useful with AI, serving as virtual assistants that can do all the boring stuff we don’t want to, like summarizing all your new emails and filtering out junk. There’s a world in the not too distant future, according to AI proponents, where you can simply tell Siri or Google “order my usual breakfast from the coffee shop near the office, I’ll be there in 10 minutes to pick it up,” and the bot will do just that.

We’re not there yet, however. And so far, the consumer applications for AI are simultaneously underwhelming and dystopian.

Distorted images may be harmless social media fodder, until they become propaganda spread by bad actors."

https://edition.cnn.com/2024/05/10/business/ai-dystopia-silicon-valley-nightcap/index.html

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #HumanRights: "You point out that AI isn’t just some benign cloud floating above our heads. It’s based on material extraction and the exploitation of workers, mainly in the global south, and it’s incredibly polluting to run. But so much of this is hidden from view. How do we go about tackling these impacts?

It is a huge question. One way of dealing with it is by looking at the question of AI adoption from an ESG [environmental, social and governance] perspective. All of the equipment that we use, the phones that we’re talking on now, are built from minerals often taken from conflict regions, including with child labour. Being aware of that hopefully can help shift societal demands and consumer habits. You can use generative AI to make a hilarious meme, but how much water and energy are you expending? Couldn’t you just pick up a pencil, and might that actually be more satisfying?

Do you sometimes wish that AI could be put back on the shelf?

It’s not an all-or-nothing equation between banning AI or embracing it into every aspect of your life. It’s a question of choosing what we want to use AI for. Being critical and asking questions doesn’t mean that you’re against AI: it just means you’re against AI hype."

https://www.theguardian.com/technology/article/2024/may/11/human-rights-lawyer-susie-alegre-ai-artificial-intelligence-human-rights-robot-wrongs-book-interview?utm_source=pocket_saves

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #AIRegulation #TechPolicy: "The overwhelming message that emerges from these books, ironic as it may seem, is a newfound appreciation of the collective powers of human creativity. We rightly marvel at the wonders of AI, but still more astonishing are the capabilities of the human brain, which weighs 1.4kg and consumes just 25 watts of power. For good reason, it has been called the most complex organism in the known universe.

As the authors admit, humans are also deeply flawed and capable of great stupidity and perverse cruelty. For that reason, the technologically evangelical wing of Silicon Valley actively welcomes the ascent of AI, believing that machine intelligence will soon supersede the human kind and lead to a more rational and harmonious universe. But fallibility may, paradoxically, be inextricably intertwined with intelligence. As the computer pioneer Alan Turing noted, “If a machine is expected to be infallible, it cannot also be intelligent.” How intelligent do we want our machines to be?"

https://www.ft.com/content/32f6a003-e5b4-442a-9a5d-37bdc1c6d392?desktop=true&segmentId=7c8f09b9-9b61-4fbb-9430-9208a9e233c8#myft:notification:daily-email:content
