remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Despite my pessimism about the droves of AI marketing hype, if not AI washing, likely to barrage the next couple of years of tech announcements, I have hope that consumer interest and common sense will yield skepticism that stops some of the worst so-called AI gadgets from getting popular or misleading people."

https://arstechnica.com/gadgets/2024/04/ai-marketing-hype-is-coming-for-your-favorite-gadgets/

br00t4c, to ai
@br00t4c@mastodon.social avatar

Recruiters Are Going Analog to Fight the AI Application Overload

https://www.wired.com/story/recruiters-ai-application-overload/

remixtures, to Bulgaria Portuguese
@remixtures@tldr.nettime.org avatar

: "In the EU, the GDPR requires that information about individuals is accurate and that they have full access to the information stored, as well as information about the source. Surprisingly, however, OpenAI openly admits that it is unable to correct incorrect information on ChatGPT. Furthermore, the company cannot say where the data comes from or what data ChatGPT stores about individual people. The company is well aware of this problem, but doesn’t seem to care. Instead, OpenAI simply argues that “factual accuracy in large language models remains an area of active research”. Therefore, noyb today filed a complaint against OpenAI with the Austrian DPA."

https://noyb.eu/en/chatgpt-provides-false-information-about-people-and-openai-cant-correct-it

caspar, to llm
@caspar@hachyderm.io avatar

I find that one great use for LLMs is for something a bit like rubber duck debugging, but for any topic.

You ask the LLM for its thoughts on a topic and respond. The LLM probably has no real insights to give you, since these things necessarily live in the world of cliché, but the process can help you to clarify your thoughts.

Then you can talk with a thoughtful and intelligent person to find the real errors in your thinking.

ThatChipGuy, to generativeAI
@ThatChipGuy@zeppelin.flights avatar

Looking for recent, credible polling about consumer attitudes toward AI, ideally released in 2024, a la https://www.elon.edu/u/news/2024/02/29/the-imagining-the-digital-future-center-technology-experts-general-public-forecast-impact-of-artificial-intelligence-by-2040/ and https://news.stonybrook.edu/newsroom/what-does-the-american-public-really-think-of-ai/ . Struggling with Google search results, as one does these days. Please boost for reach and share links with me.

remixtures, to apple Portuguese
@remixtures@tldr.nettime.org avatar

: "Apple has removed a number of AI image generation apps from the App Store after 404 Media found these apps advertised the ability to create nonconsensual nude images, a sign that app store operators are starting to take more action against these types of apps.

Overall, Apple removed three apps from the App Store, but only after we provided the company with links to the specific apps and their related ads, indicating the company was not able to find the apps that violated its policy itself.

Apple’s action comes after we reported on Monday that Instagram advertises nonconsensual AI nude apps. By browsing Meta’s Ad Library, which archives ads on its platform, when they ran, on what platforms, and who paid for them, we were able to find ads for five different apps, each with dozens of ads. Two of the ads were for web-based services, and three were for apps on the Apple App Store. Meta deleted the ads when we flagged them. Apple did not initially respond to a request for comment on that story, but reached out to me after it was published asking for more information. On Tuesday, Apple told us it removed the three apps on its App Store." https://www.404media.co/apple-removes-nonconsensual-ai-nude-apps-following-404-media-investigation/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "There seem to be clear indications of a novelty factor at work. And while novelty in and of itself is not a bad thing, if it isn’t followed with a consistent behavior change, we can’t really call it a trend.

Take the above Bing.com numbers for instance. If we credit the inclusion of AI search tools on the platform as the cause of the unique user bump, it would seemingly serve to solidify the predicted 25% drop. Yet when we consulted our panel data further, we found that only between 4% and 9% of users used Bing Chat (their AI agent) in any given month during 2023. What’s more, of those that did use it, only two to four searches were conducted over the ensuing month.

Which brings up an even more surprising finding.

While all of the traditional search engines had repeated searches from each user over the course of a month, the AI chatbots all displayed initial enthusiasm, followed by a steep decline in usage." https://datos.live/predicted-25-drop-in-search-volume-remains-unclear/

zdl, to ai
@zdl@mastodon.online avatar

Encyclopaedia Metallum¹ just put out an interesting statement² on AI. Despite being in text, I have to make it an image because of character counts. One of the ironies of living in my SF future apparently.


¹ https://www.metal-archives.com/
² https://www.metal-archives.com/news/view/id/296

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Until now, all AI-generated videos of people have tended to have some stiffness, glitchiness, or other unnatural elements that make them pretty easy to differentiate from reality. Because they’re so close to the real thing but not quite it, these videos can make people feel annoyed or uneasy or icky—a phenomenon commonly known as the uncanny valley. Synthesia claims its new technology will finally lead us out of the valley.

Thanks to rapid advancements in generative AI and a glut of training data created by human actors that has been fed into its AI model, Synthesia has been able to produce avatars that are indeed more humanlike and more expressive than their predecessors. The digital clones are better able to match their reactions and intonation to the sentiment of their scripts—acting more upbeat when talking about happy things, for instance, and more serious or sad when talking about unpleasant things. They also do a better job matching facial expressions—the tiny movements that can speak for us without words.

But this technological progress also signals a much larger social and cultural shift. Increasingly, so much of what we see on our screens is generated (or at least tinkered with) by AI, and it is becoming more and more difficult to distinguish what is real from what is not. This threatens our trust in everything we see, which could have very real, very dangerous consequences." https://www.technologyreview.com/2024/04/25/1091772/new-generative-ai-avatar-deepfake-synthesia/

Snowshadow, to news
@Snowshadow@mastodon.social avatar

Ok those of you who defend AI please tell me how this will help humanity!!

Big Brother is here!! I have been telling people don't post selfies!!

"AI detects individual’s political orientation accurately, a threat?

Study showed new threat in the digital age–AI’s ability to predict political orientation from even naturalistic images of individuals."

#News #AI #Politics #HumanRights #FacialRecognition

https://interestingengineering.com/culture/ai-detects-individuals-political-orientation-accurately-a-threat

gimulnautti,
@gimulnautti@mastodon.green avatar

On building a better and fairer information economy in the age of AI

https://gimulnaut.wordpress.com/2023/01/13/copyright-wars-pt-2-ai-vs-the-public/

tarkowski, to generativeAI
@tarkowski@101010.pl avatar

@henryfarrell@mastodon.social wrote a great essay outlining a political economy of #generativeAI.

My thinking aligns with his in a lot of ways, and I especially like:

✦ how he takes the "Shoggoth" metaphor, often used to incite moral panic about AGI, and shows that corporations are the real Shoggoths that we should be worried about
✦ how he deploys the "map and territory" metaphor to describe the political stakes of genAI - the struggle is for control of technologies that are increasingly used to substitute maps for the real territories they represent
✦ how he notes a reconfiguration of political positions of activists and organizations like Open Future Foundation - and signals the need for a new advocacy agenda based on a good understanding of emergent ways of creating synthetic knowledge and culture, and focused on supporting and protecting human knowledge.

https://www.programmablemutter.com/p/the-political-economy-of-ai

Tevis, to ai
@Tevis@mastodon.social avatar

Someone used AI-generated audio to depict a school principal making racist comments about students, prompting a flood of angry messages and the principal's temporary removal.

We are not prepared for the world we have created.

https://www.thebaltimorebanner.com/education/k-12-schools/eric-eiswert-ai-audio-baltimore-county-YBJNJAS6OZEE5OQVF5LFOFYN6M/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Based on an analysis of 4,500 headline requests (in 900 outputs) from ChatGPT and Bard collected across ten countries, we find that:

  • When prompted to provide the current top news headlines from specific outlets, ChatGPT returned non-news output 52–54% of the time (almost always in the form of an ‘I’m unable to’-style message). Bard did this 95% of the time.
  • For ChatGPT, just 8–10% of requests returned headlines that referred to top stories on the outlet’s homepage at the time. This means that when ChatGPT did return news-like output, the headlines provided did not refer to current top news stories most of the time.
  • Of the remaining requests, around one-third (30%) returned headlines that referred to real, existing stories from the news outlet in question but they were not among the latest top stories, either because they were old or because they were not at the top of the homepage.
  • Around 3% of outputs from ChatGPT contained headlines that referred to real stories that could only be found on the website of a different outlet. The misattribution (but not the story itself) could be considered a form of hallucination. A further 3% were so vague and ambiguous that they could not be matched to existing stories. These outputs could also be considered a form of hallucination.
  • The outputs from ChatGPT are heavily influenced by whether news websites have chosen to block it, and outputs from identical prompts can change over time for reasons that are not clear to users."

https://reutersinstitute.politics.ox.ac.uk/im-unable-how-generative-ai-chatbots-respond-when-asked-latest-news

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "The head of Indian IT company Tata Consultancy Services has said artificial intelligence will result in “minimal” need for call centres in as soon as a year, with AI’s rapid advances set to upend a vast industry across Asia and beyond.

K Krithivasan, TCS chief executive, told the Financial Times that while “we have not seen any job reduction” so far, wider adoption of generative AI among multinational clients would overhaul the kind of customer help centres that have created mass employment in countries such as India and the Philippines.

“In an ideal phase, if you ask me, there should be very minimal incoming call centres having incoming calls at all,” he said. “We are in a situation where the technology should be able to predict a call coming and then proactively address the customer’s pain point.”"

https://www.ft.com/content/149681f0-ea71-42b0-b85b-86073354fb73

cassidy, (edited) to ai
@cassidy@blaede.family avatar

I really like the convention of using ✨ sparkle iconography as an “automagic” motif, e.g. to smart-adjust a photo or to automatically handle some setting. I hate that it has become the de facto iconography for generative AI. 🙁

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "How did Microsoft cram a capability potentially similar to GPT-3.5, which has at least 175 billion parameters, into such a small model? Its researchers found the answer by using carefully curated, high-quality training data they initially pulled from textbooks. "The innovation lies entirely in our dataset for training, a scaled-up version of the one used for phi-2, composed of heavily filtered web data and synthetic data," writes Microsoft. "The model is also further aligned for robustness, safety, and chat format."

Much has been written about the potential environmental impact of AI models and datacenters themselves, including on Ars. With new techniques and research, it's possible that machine learning experts may continue to increase the capability of smaller AI models, replacing the need for larger ones—at least for everyday tasks. That would theoretically not only save money in the long run but also require far less energy in aggregate, dramatically decreasing AI's environmental footprint. AI models like Phi-3 may be a step toward that future if the benchmark results hold up to scrutiny.

Phi-3 is immediately available on Microsoft's cloud service platform Azure, as well as through partnerships with machine learning model platform Hugging Face and Ollama, a framework that allows models to run locally on Macs and PCs."

https://arstechnica.com/information-technology/2024/04/microsofts-phi-3-shows-the-surprising-power-of-small-locally-run-ai-language-models/

br00t4c, to generativeAI
@br00t4c@mastodon.social avatar

STAT+: Generative AI is supposed to save doctors from burnout. New data show it needs more training

https://www.statnews.com/2024/04/25/health-ai-large-language-models-clinical-documentation/?utm_campaign=rss

1br0wn, to ai
@1br0wn@eupolicy.social avatar

‘The head of Indian IT company Tata Consultancy Services has said artificial intelligence will result in “minimal” need for call centres in as soon as a year, with AI’s rapid advances set to upend a vast industry across Asia and beyond... its pipeline of projects doubled quarter over quarter to be worth $900mn to the end of March.’ https://www.ft.com/content/149681f0-ea71-42b0-b85b-86073354fb73

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "For years, people who have found Google search frustrating have been adding “Reddit” to the end of their search queries. This practice is so common that Google even acknowledged the phenomenon in a post announcing that it will be scraping Reddit posts to train its AI. And so, naturally, there are now services that will poison Reddit threads with AI-generated posts designed to promote products.

A service called ReplyGuy advertises itself as “the AI that plugs your product on Reddit” and which automatically “mentions your product in conversations naturally.” Examples on the site show two different Redditors being controlled by AI posting plugs for a text-to-voice product called “AnySpeech” and a bot writing a long comment about a debt consolidation program called Debt Freedom Now." https://www.404media.co/ai-is-poisoning-reddit-to-promote-products-and-game-google-with-parasite-seo/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "Last week, Microsoft researchers released WizardLM 2, which it claimed is one of the most powerful open source large language models to date. Then it deleted the model from the internet a few hours later because, as The Information reported, it “accidentally missed” required “toxicity testing” before it was released.

However, as first spotted by Memetica, in the short hours before it was taken down, several people downloaded the model and reuploaded it to Github and Hugging Face, meaning that the model Microsoft thought was not ready for public consumption and had to be taken offline, has already spread far and wide, and now effectively can never be removed from the internet.

Microsoft declined to comment for this article.

According to a now deleted post from the developers of WizardLM 2 about its release, the open source model is Microsoft’s “next generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning and agent.”" https://www.404media.co/microsoft-deleted-its-llm-because-it-didnt-get-a-safety-test-but-now-its-everywhere/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "5 flaws of the AI Act from the perspective of civic space and the rule of law

  1. Gaps and loopholes can turn prohibitions into empty declarations
  2. AI companies’ self-assessment of risks jeopardises fundamental rights protections
  3. Standards for fundamental rights impact assessments are weak
  4. The use of AI for national security purposes will be a rights-free zone
  5. Civic participation in the implementation and enforcement is not guaranteed"

https://edri.org/our-work/packed-with-loopholes-why-the-ai-act-fails-to-protect-civic-space-and-the-rule-of-law/
