peterrenshaw, to ai
@peterrenshaw@ioc.exchange avatar

“Less than 24 hours after publication on our digital platforms, The #IrishTimes became aware that the column may not have been genuine. That prompted us to remove it from the site and to initiate a review, which is ongoing. It now appears that the #article and the accompanying #byline photo may have been produced, at least in part, using #GenerativeAI technology. It was a #hoax; the person we were corresponding with was not who they claimed to be. We had fallen victim to a deliberate and coordinated #deception.”

#AI / #journalism / #propaganda <https://irishtimes.com/ireland/2023/05/14/a-message-from-the-editor/>

pallenberg, to random
@pallenberg@mastodon.social avatar

As a photographer, this would scare the living daylights out of me.

The results of V5.1 are mind-blowing, and all of this evolved in just over a year. Wow!

itnewsbot, to random
@itnewsbot@schleuss.online avatar

3D Design With Text-Based AI - Generative AI is the new thing right now, proving to be a useful tool both for pro... - https://hackaday.com/2023/05/12/3d-design-with-text-based-ai/ #artificialintelligence #neuralradiancefields #texturedmeshes #generativeai #openai #model #news #3d

ErikJonker, to ai
@ErikJonker@mastodon.social avatar

Suppose I would like to measure the amount of bias, discrimination, hallucination, etc. in tools like Bard, Bing, ChatGPT and others. Are there already standards and tools to measure that?
There will be discussions about whether model A is better or worse than model B; it would be nice to have some standards/benchmarks for evaluation 🤔
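Benchmarks for exactly this are emerging (e.g. TruthfulQA for hallucination, BBQ for social bias), and at bottom any such harness scores model answers against ground truth. A minimal sketch, where `ask_model` is a hypothetical stand-in for a real vendor API call (Bard, Bing, ChatGPT, ...):

```python
# Minimal sketch of an evaluation harness: score a chatbot against a
# small ground-truth QA set. `ask_model` is a hypothetical stand-in
# for a real vendor API call.
def ask_model(question: str) -> str:
    # placeholder: in practice this would call the model's API
    canned = {"What is the capital of France?": "Paris"}
    return canned.get(question, "I don't know")

def accuracy(qa_pairs) -> float:
    """Fraction of questions whose gold answer appears in the model's reply."""
    hits = sum(1 for q, gold in qa_pairs
               if gold.lower() in ask_model(q).lower())
    return hits / len(qa_pairs)

qa = [("What is the capital of France?", "Paris"),
      ("Who wrote Hamlet?", "Shakespeare")]
print(accuracy(qa))  # 0.5 with the canned stand-in model
```

Real benchmark suites swap in thousands of items and subtler scoring (toxicity classifiers, counterfactual pairs for bias), but the skeleton is the same.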

itnewsbot, to ChatGPT
@itnewsbot@schleuss.online avatar

“Meaningful harm” from AI necessary before regulation, says Microsoft exec - Enlarge (credit: HJBC | iStock Editorial / Getty Images Plus)

... - https://arstechnica.com/?p=1938701

baldur, to random
@baldur@toot.cafe avatar

Regarding this post:

https://toot.cafe/@GovTrack@mastodon.social/110350101457791575

Language model plagiarism is an issue that nobody seems to want to talk about, even though vendors themselves say that direct copying from the training data set happens around 1% of the time. According to other researchers the rate varies: sometimes less, at around 0.1% (which is still incredibly high for daily use), and sometimes more, at around 2%.

And vendor tests, such as Microsoft's, are based on longer runs of text, so they wouldn't count this one.
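For scale: short verbatim matches of the kind a longer-run test would miss can be flagged with plain word n-gram overlap. A hedged sketch (not any vendor's actual method):

```python
# Hedged sketch: flag short verbatim copying by measuring word n-gram
# overlap between a model's output and a reference text. Tests keyed to
# long runs of text miss matches shorter than their window; shrinking n
# catches them.
def ngrams(text: str, n: int) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_rate(output: str, reference: str, n: int = 5) -> float:
    """Fraction of the output's word n-grams found verbatim in the reference."""
    out = ngrams(output, n)
    return len(out & ngrams(reference, n)) / len(out) if out else 0.0

reference = "the quick brown fox jumps over the lazy dog every morning"
generated = "she said the quick brown fox jumps over a fence"
print(round(overlap_rate(generated, reference), 2))  # 2 of 6 five-grams match -> 0.33
```

Against a full training corpus this needs hashing or suffix indexes to scale, but the metric itself is this simple.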

bornach,
@bornach@masto.ai avatar

@baldur
What about indirect copying?

I bet nothing output by the #GenerativeAI in the following article would be considered "direct copying from training data"
https://www.digitaltrends.com/gaming/sumplete-chatgpt-ai-game-design-ethics/
#plagiarism

dangillmor, to random
@dangillmor@mastodon.social avatar

"(W)hat we are witnessing is the wealthiest companies in history (Microsoft, Apple, Google, Meta, Amazon …) unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products, many of which will take direct aim at the humans whose lifetime of labor trained the machines without giving permission or consent."

https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein

chavan,

“AI art generators are trained on...millions of copyrighted images, harvested without their creator’s knowledge, let alone compensation or consent. This is effectively the greatest art heist in history."

"Why should a for-profit company be permitted to feed the [work] of living artists into a program..so it can then be used to generate doppelganger versions of those very artists’ work, with the benefits flowing to everyone but the artists themselves?" #generativeAI

https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein

saraheskens, to random

if more and more content is made with #generativeAI, then our information environment is slowly but steadily filled with bland texts offering weak 'you could say A or non-A' arguments and impersonal analysis. What does the increase of such texts mean for learning, both human and machine? We learn by reading the works of others, and machines are trained on other works.

Does generative AI in the end lead to the DEGENERATION of writing, analysis, and creativity?

Norobiik, to random
@Norobiik@noc.social avatar

"#TVandFilm #writers in the US – 11,500 of them – have walked off their jobs for the first time in 15 years. On May 2, their negotiations with the Alliance of Motion Picture and Television Producers broke down. One of the bargaining points: the role of AI in writing scripts"

How #AI factors into #Hollywood writers strike [Podcast] | #WGAStrike #GenerativeAI | Al Jazeera
https://www.aljazeera.com/podcasts/2023/5/9/how-ai-factors-into-hollywoods-writers-strike

ppatel, to random
@ppatel@mstdn.social avatar

Considering this set of principles by which it tries to train its model, I found that it does not always meet those principles.

Anthropic, an AI startup founded by former OpenAI staff that has raised $1.3B (including a $300M investment from one backer), details its "constitutional AI" approach for safer systems.

https://www.theverge.com/2023/5/9/23716746/ai-startup-anthropic-constitutional-ai-safety

creativecommons, to random
@creativecommons@mastodon.social avatar

🎉 We are excited to share the Generative AI at MozFest 2023 Report, a collaboration between Creative Commons and the Movement for a Better Internet! This report highlights the key insights from our session at Mozilla Festival 2023, where we discussed the opportunities, risks, and potential solutions of generative AI.

Read the full report here ➡️ https://loom.ly/1Bzu4EA

jbzfn, to random
@jbzfn@mastodon.social avatar

「 The book, titled “Automating DevOps with GitLab CI/CD Pipelines,” just like Cowell’s, listed as its author one Marie Karpos, whom Cowell had never heard of. When he looked her up online, he found literally nothing — no trace. That’s when he started getting suspicious.

The book bears signs that it was written largely or entirely by an artificial intelligence language model, using software such as OpenAI’s ChatGPT 」
@washingtonpost


https://www.washingtonpost.com/technology/2023/05/05/ai-spam-websites-books-chatgpt/

ppatel, to opensource
@ppatel@mstdn.social avatar

Step 1: Create the problem.
Step 2: Make Money.
Step 3: Promise to solve the problem.
Step 4: Make money.

Pay attention to the last paragraph in this piece. Hint, they say open source sucks.

After #GPTZero gained 1.2M users since January, co-founder Edward Tian raised $3.5M to launch Origin, aimed at "saving journalism" by detecting #AI disinformation.

https://www.bloomberg.com/news/articles/2023-05-08/gptzero-seeks-to-thwart-plagerism-in-schools-online-media

#GenerativeAI #MachineLearning #OpenSource

remixtures, to random Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #WallStreet #SiliconValley: "To get a piece of that sweet AI-craze money, even the most powerful tech moguls are trying to make it seem as if their company is the real leader in AI, embracing the timeless truth passed down by Will Ferrell's fictional race car driver Ricky Bobby: "If you ain't first, you're last."

Wall Street, never one to miss a trend, has also embraced the AI hype. But as Daniel Morgan, a senior portfolio manager at Synovus Trust, said in an interview with Bloomberg TV, "This AI hype doesn't really trickle down into any huge profit growth. It's just a lot of what can happen in the future." AI-driven products are not bringing in big bucks yet, but the concept is already pumping valuations.

That is what makes the hype cycle a Hail Mary: Silicon Valley is hoping and praying that AI hype can keep customers and investors distracted until their balance sheets can bounce back. Sure, rushing out an unproven new technology to distract from the problems of the tech industry and global economy may be a bit ill-advised. But, hey, if society suffers a little along the way, well — that's what happens when you move fast and break things."

https://www.businessinsider.com/ai-technology-chatgpt-silicon-valley-save-business-stock-market-jobs-2023-5

baldur, to random
@baldur@toot.cafe avatar

I wonder how long it’ll take for fans of AI art to discover that it both has a specific aesthetic and that aesthetics eventually fall out of popular fashion?

“That looks soooo 2023”.

bornach,
@bornach@masto.ai avatar

@baldur
Well this artist seems to put a tremendous amount of effort into developing his own signature #AIArt aesthetic

https://youtu.be/K0ldxCh3cnI

He spends so much time Photoshopping the #StableDiffusion in-painting that the #GenerativeAI output now represents only about 40% of the whole workflow. Good-quality art still requires the artist to put in a lot of their own time and effort integrating the new tools into their overall vision.

bornach, to random
@bornach@masto.ai avatar

Weighing in on that study that found the chatbot to be more empathetic than human physicians,
@rebeccawatson finds the research lacked rigor: the authors themselves participated in the "blind" study, and whether the diagnosis was even correct didn't feature highly in their assessment of quality

https://youtu.be/zRWm1E2Bn-U

ErikJonker, to random
@ErikJonker@mastodon.social avatar

So now we can all create endless AI-generated pictures with Bing chat. An example made with "Create a picture of a modern house on an exposed cliff" — quite nice. #AI #GenerativeAI #Bing

yurnidiot, to random
@yurnidiot@mstdn.social avatar

Q: what do you get when you cross chicken nuggs and pizza?

A: more nightmare fuel.

AI Generated Commercial for Pizza Nuggets made with @RunwayML Gen-2.

source: AI Lost Media
https://www.youtube.com/watch?v=Zrg4t3_PdLM

#AI #AIArt #AIGenerated #GenerativeAI #GenerativeArt #pizzanuggets #nightmarefuel

video/mp4

bigdata, to random

🆕 Newsletter 🚀 Building software systems with LLMs and other Generative Models will primarily involve writing text instructions → I explore the fascinating world of prompt engineering, LLMs & #NLProc pipelines.
#MachineLearning #GenerativeAI #LLMs
🔗 https://gradientflow.substack.com/p/the-future-of-prompt-engineering

remixtures, to random Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #ChatGPT #HumanRights: "ChatGPT and generative artificial intelligence (AI) are dominating headlines and conversations. We see it when people post strange and intriguing screenshots of chatbot conversations or images on social media, and we can now “interact” with chatbots on search platforms. But what’s behind this technology? Who feeds it data and decides where the data comes from? What does this have to do with human rights? Senior Web Producer Paul Aufiero speaks with Anna Bacciarelli, program manager in Human Rights Watch’s Tech and Human Rights division, about the questions at the center of this new debate, as companies race to develop and implement generative AI."

https://www.hrw.org/news/2023/05/03/pandoras-box-generative-ai-companies-chatgpt-and-human-rights

yurnidiot, to homebrewing
@yurnidiot@mstdn.social avatar

first pizza, now an AI-generated beer commercial set to Smash Mouth's "All Star". it's only uphill from here on out, folks.

source: https://privateisland.tv/project

#AI #AIArt #AIGenerated #GenerativeAI #GenerativeArt #beer

video/mp4

remixtures, to random Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #LLMs #Chatbots #ChatGPT: "Do you think the public has been too credulous about ChatGPT?

It’s not just the public. Some of your friends at your newspaper have been a bit credulous. In my book, “Rebooting A.I.,” we talked about the Eliza effect — we called it the “gullibility gap.” In the mid-1960s, Joseph Weizenbaum wrote this primitive piece of software called Eliza, and some people started spilling their guts to it. It was set up as a psychotherapist, and it was doing keyword matching. It didn’t know what it was talking about, but it wrote text, and people didn’t understand that a machine could write text and not know what it was talking about. The same thing is happening right now. It is very easy for human beings to attribute awareness to things that don’t have it. The cleverest thing that OpenAI did was to have GPT type its answers out one character at a time — made it look like a person was doing it. That adds to the illusion. It is sucking people in and making them believe that there’s a there there that isn’t there. That’s dangerous. We saw the Jonathan Turley incident, when it made up sexual harassment charges. You have to remember, these systems don’t understand what they’re reading. They’re collecting statistics about the relations between words. If everybody looked at these systems and said, “It’s kind of a neat party trick, but haha, it’s not real,” it wouldn’t be so disconcerting. But people believe it because it’s a search engine. It’s from Microsoft. We trust Microsoft. Combine that human overattribution with the reality that these systems don’t know what they’re talking about and are error-prone, and you have a problem."

https://www.nytimes.com/interactive/2023/05/02/magazine/ai-gary-marcus.html

BenjaminHan, to gpt
@BenjaminHan@sigmoid.social avatar

1/

Solving causal tasks is a hallmark of intelligence. One recent study [1] categorizes these tasks into covariance-based and logic-based reasoning (screenshot) and examines how models perform on causal discovery, actual causality, and causal judgments.

BenjaminHan,
@BenjaminHan@sigmoid.social avatar

6/

The moral of the story: when investigating capabilities of black-box models, always perform memorization tests first on the benchmark datasets!
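The probe itself can be sketched in a few lines: feed the model the first part of each benchmark item and check whether it reproduces the known tail verbatim. This is a minimal illustration, with `complete` as a hypothetical stand-in for a real LLM call:

```python
# Hedged sketch of a memorization test: give the model the first half
# of each (multi-word) benchmark item and check whether it completes
# the known second half verbatim. `complete` is a hypothetical stand-in
# for a real LLM call.
def complete(prefix: str) -> str:
    # placeholder: a model that has memorized exactly one benchmark item
    memorized = {"Distinguishing Cause from Effect Using":
                 " Observational Data: Methods and Benchmarks"}
    return memorized.get(prefix, "")

def memorization_rate(items) -> float:
    """Fraction of items whose tail the model reproduces exactly."""
    hits = 0
    for text in items:
        # split near the middle, on a word boundary
        cut = text.rfind(" ", 0, len(text) // 2)
        prefix, tail = text[:cut], text[cut:]
        if complete(prefix) == tail:
            hits += 1
    return hits / len(items)

items = ["Distinguishing Cause from Effect Using Observational Data: Methods and Benchmarks",
         "some other benchmark item that is not memorized at all"]
print(memorization_rate(items))  # -> 0.5
```

A nonzero rate on a benchmark's test split means the "evaluation" partly measures recall of training data, not reasoning.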

BenjaminHan,
@BenjaminHan@sigmoid.social avatar

7/

[1] Emre Kıcıman, Robert Ness, Amit Sharma, and Chenhao Tan. 2023. Causal Reasoning and Large Language Models: Opening a New Frontier for Causality. http://arxiv.org/abs/2305.00050

[2] Joris M. Mooij, Jonas Peters, Dominik Janzing, Jakob Zscheischler, and Bernhard Schölkopf. 2016. Distinguishing Cause from Effect Using Observational Data: Methods and Benchmarks. Journal of machine learning research: JMLR, 17(32):1–102. https://jmlr.org/papers/v17/14-518.html
