GW, to microsoft

Times Sues OpenAI Over The Use Of Its Stories To Train AI

Artificial intelligence companies scrape information available online, including articles.

In the suit filed Wednesday in Manhattan federal court, the Times said OpenAI and Microsoft are advancing their technology through the “unlawful use of The Times’s work to create artificial intelligence products that compete with it” and “threatens The Times’s ability to provide that service.”

https://www.huffpost.com/entry/new-york-times-openai-lawsuit_n_658c76a2e4b03057f5cc417b

marcel, to random German
@marcel@waldvogel.family avatar

Here's an attempt at the -idea: one 🧵 each in English and German for every one of my Fediverse threads.

To start, here's the list of my most-read articles. Have fun !

🔟 Not really «Responsible Disclosure»: the extra helping of spam over the holidays (2023-12)
Not even two days old and it's already making the , wow!

Please handle your disclosures differently. Thank you!
https://waldvogel.family/@marcel/111622567290149119
https://dnip.ch/2023/12/22/nicht-wirklich-responsible-disclosure-die-extraportion-spam-ueber-die-festtage/

marcel,
@marcel@waldvogel.family avatar

3️⃣ AI is not random
Am I giving away too much if I say that my top three articles all deal with #KünstlicheIntelligenz? Oh, sorry, I didn't say anything! So: psst!

#Chatbots and #Bildgeneratoren use random numbers in many places, but that is not what makes them unreliable. A look into the wonderful world of random numbers and their use in #KI, and why you mustn't lump everything together.
#GenAI #AI #LLM
https://dnip.ch/2023/05/08/ki-ist-kein-zufall/
More on #KI: https://marcel-waldvogel.ch/ki
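The point that random numbers do not make a system unreliable can be illustrated in a few lines: pseudo-random generators are deterministic given a seed, which is why image generators can expose a seed parameter and reproduce a "random" result exactly. A minimal sketch (illustrative only, using Python's standard `random` module as a stand-in):

```python
import random

# Pseudo-random numbers are deterministic given a seed: the same seed
# always reproduces the same "random" sequence.
random.seed(42)
first = [random.randint(0, 9) for _ in range(5)]

random.seed(42)
second = [random.randint(0, 9) for _ in range(5)]

print(first == second)  # True: same seed, same sequence
```

Randomness here is controlled, not chaotic, which is the article's point about not lumping "random" and "unreliable" together.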

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #Chatbots #Energy #Infrastructure #BigTech: "Tech giants such as Google, Microsoft, Amazon, and Meta — so-called hyperscalers — are spending billions of dollars more to build out their own data centers. Jensen Huang, the CEO of the AI-chip giant Nvidia, predicts that in the next four years alone, companies will spend $1 trillion to add to their arsenal of data centers — creating what Blackstone, one of the world's largest asset managers, calls a "once-in-a-generation engine for future growth in data centers."

But that growth may come at a steep cost. At the heart of the data-center boom lies a strange paradox: The more the internet has consumed our lived reality, the easier it's been to ignore the physical infrastructure required to power that reality. Today, the complexity of the large language models being developed by OpenAI, Google, and Meta — and the frenzy to bake those models into everything from Google Search to Facebook stickers to "Harry Potter" fan fiction — is forcing more and more Americans like Schlossberg to confront the high price of our digital addiction.

"People need to know that every picture, every TED Talk, every Instagram, everything they save is going into a concrete box that requires power," Schlossberg said. "It's not free, and it's not ethereal, and it's not fluffy." For all of the fears about AI's brain — that chatbots will steal our jobs or somehow destroy humanity as we know it — it's AI's body that could claim us first."

https://www.businessinsider.in/tech/news/chatbots-came-to-conquer-their-community-the-townsfolk-fought-back-/articleshow/106184887.cms


itnewsbot, to ArtificialIntelligence

Q&A: How Athenahealth moved from traditional AI to genAI and ChatGPT - Athenahealth provides software and services for medical groups and health systems arou... - https://www.computerworld.com/article/3711802/qa-how-athenahealth-moved-from-traditional-ai-to-genai-and-chatgpt.html#tk.rss_all #artificialintelligence #emergingtechnology #healthcareindustry #generativeai #itleadership #chatbots

whitemice, to technology
@whitemice@mastodon.social avatar

A fun look at a truly annoying [and revealing] aspect of LLMs: they are more like improv actors than experts. #technology #llms #chatbots #improv
https://nibblestew.blogspot.com/2023/12/ai-silliness-getting-to-no.html

itnewsbot, to ArtificialIntelligence

Q&A: Sedgwick exec lays out 'the baby steps to genAI adoption' - Sedgwick, a third-party insurance claims management provider operating in 80 countries... - https://www.computerworld.com/article/3711780/qa-sedgwick-exec-lays-out-the-baby-steps-to-genai-adoption.html#tk.rss_all #artificialintelligence #emergingtechnology #microsoft365 #generativeai #microsoft #chatbots

ton, to ChatGPT

Save on paid ChatGPT accounts, just find a website that uses it as a marketing chatbot and use theirs! ;)

https://www.zylstra.org/blog/2023/12/marketing-chat-bots-as-generic-chatgpt-access-points/

#chatbots #chatgpt #openai

mimarek, to news

The popularization of AI chatbots has not boosted overall cheating rates in high schools, according to new research from Stanford University.

About 60% to 70% of surveyed students have engaged in cheating behavior in the past month, a rate no higher than before chatbots arrived.

https://edition.cnn.com/2023/12/13/tech/chatgpt-did-not-increase-cheating-in-high-schools/index.html

#News #AI #Chatbots #Cheating #HighSchool #Education #academia #Research @academicchatter

itnewsbot, to ArtificialIntelligence

Microsoft unveils Phi-2, the next of its smaller, more nimble genAI models - Microsoft has announced the next of its suite of smaller, more nimble artificial intel... - https://www.computerworld.com/article/3711701/microsoft-unveils-phi-2-the-next-of-its-smaller-more-nimble-genai-models.html#tk.rss_all #naturallanguageprocessing #artificialintelligence #emergingtechnology #generativeai #microsoft365 #microsoft #chatbots

marcel, to ai German
@marcel@waldvogel.family avatar

As an (IT) security expert, Bruce #Schneier has long been engaged with the topic of #Vertrauen (trust). A few days ago he published an essay on whether and how that squares with #KI:

1️⃣ Trust comes in two forms: #Zwischenmenschlich (interpersonal, between friends) and #gesellschaftlich (societal, toward strangers and organizations)
2️⃣ These two work so differently that they must always be kept clearly apart.

#BruceSchneier #AI #KünstlicheIntelligenz 1/n
https://dnip.ch/2023/12/11/marcel-pendelt-ki-und-vertrauen/

marcel,
@marcel@waldvogel.family avatar

3️⃣ Companies nevertheless try to pass themselves off as "friends" via advertising and social media
4️⃣ #Chatbots and the #KI companies behind them blur this #Vertrauen more and more
5️⃣ To keep this from being abused, we need #Regulierung:
➡️ #Transparenz, #Sicherheit, #Durchsetzung, #Strafen (transparency, security, enforcement, penalties) for violations
6️⃣ More important than regulating the technology: regulating the people and organizations behind it
➡️ Fiduciaries, genuinely public models
#Konsumentenschutz 2/n

Here's the ultra-short version: https://marcel-waldvogel.ch/2023/12/11/ki-und-vertrauen-passt-das-zusammen/

marcel,
@marcel@waldvogel.family avatar

#Chatbots like #ChatGPT merely reproduce statistical patterns, randomly and without a plan. They are fascinating nonetheless, and in some areas genuinely helpful, at times even very much so.

That does not exempt us as humans from verifying the output of these #KI models. Just as we would with a new intern who as yet knows little about the field, the style, and the (company) culture. And who doesn't say so when he has no idea.
3/n
https://dnip.ch/2023/01/30/wie-funktioniert-eigentlich-chatgpt/
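The "reproduce statistical patterns, randomly" part can be made concrete: a language model scores candidate next tokens, and one is sampled from a softmax over those scores, with a temperature controlling how random the choice is. A minimal sketch (illustrative, not any specific model's code; the scores below are made up):

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Sample an index from model scores via a softmax at the given temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw from the resulting distribution: low temperature sharpens it
    # toward the best-scoring token, high temperature flattens it.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

scores = [2.0, 1.0, 0.1]  # hypothetical next-token scores
print(sample_token(scores, temperature=0.01))  # 0: collapses onto the top token
```

At higher temperatures the same scores yield varied outputs, which is exactly why two identical prompts can produce different answers.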

AnthonyBaker, to ai
@AnthonyBaker@mastodon.social avatar

Oh, this is delightful.

Jailbroken AI Chatbots Can Jailbreak Other Chatbots

AI chatbots can convince other chatbots to instruct users how to build bombs and cook meth

https://www.scientificamerican.com/article/jailbroken-ai-chatbots-can-jailbreak-other-chatbots/

#ai #chatbots

thejapantimes, to business
@thejapantimes@mastodon.social avatar

For market analysts, a rush to cash in on the branch of AI that deals with text comprehension is leveraging long-standing ties between university researchers and systematic investors — and opening a new frontier in so-called sentiment analysis. https://www.japantimes.co.jp/business/2023/12/07/tech/wall-street-quants-chatbot-boom-ai/?utm_content=buffer3c774&utm_medium=social&utm_source=mastodon&utm_campaign=bffmstdn #business #tech #wallstreet #openai #chatgpt #chatbots #ai #tech

itnewsbot, to ArtificialIntelligence

Here's why half of developers will soon use AI-augmented software - Generative artificial intelligence (genAI) tools to assist in the creation, testing an... - https://www.computerworld.com/article/3711404/heres-why-half-of-developers-will-soon-use-ai-augmented-software.html#tk.rss_all #artificialintelligence #softwaredevelopment #emergingtechnology #generativeai #chatbots

thejapantimes, to worldnews
@thejapantimes@mastodon.social avatar

The European Union is set to thrash out an agreement on sweeping rules to regulate artificial intelligence, following months of difficult negotiations in particular on how to monitor generative AI tools like ChatGPT. https://www.japantimes.co.jp/news/2023/12/06/world/politics/eu-world-first-ai-law/?utm_content=bufferbcfe9&utm_medium=social&utm_source=mastodon&utm_campaign=bffmstdn #worldnews #politics #eu #europe #tech #ai #chatbots #openai #chatgpt #privacy #surveillance

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #ChatBots #Medecine #Healthcare #Algorithms: "In February, San-Francisco-based Doximity, a telehealth and medical professional networking company, rolled out a beta version of its medical chatbot DocsGPT, which was intended to help doctors with multiple tasks including writing discharge instructions for patients, note taking, and responding to other medical-related prompts ranging from answering questions about medical conditions to performing calculations for health-related medical algorithms like measuring kidney function.

However, as I have reported, the app was also engaging in “race-norming” and amplifying race-based medical inaccuracies that could be dangerous to patients who are Black. Although doctors could use it to answer a variety of questions and perform tasks that would impact medical care, the chatbot itself is not classified as a medical device—as doctors aren’t technically supposed to input medically sensitive information (though several doctors and researchers have stated that many still do). As such, companies are free to develop and release these applications without going through a regulatory process that makes sure these apps actually work as intended.

Still, many companies are developing their chatbots and generative artificial intelligence models for integration into health care settings—from medical scribes to diagnostic chatbots—raising broad-ranging concerns over AI regulation and liability. Stanford University data scientist and dermatologist Roxana Daneshjou tells proto.life part of the problem is figuring out if the models even work."

https://proto.life/2023/11/the-urgent-problem-of-regulating-ai-in-medicine/?mc_cid=7d7e0a9d8d

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #Chatbots #GPT4 #Eliza #TuringTest: "The experiment involved 652 participants who completed a total of 1,810 sessions, of which 1,405 games were analyzed after excluding certain scenarios like repeated AI games (leading to the expectation of AI model interactions when other humans weren't online) or personal acquaintance between participants and witnesses, who were sometimes sitting in the same room.

Surprisingly, ELIZA, developed in the mid-1960s by computer scientist Joseph Weizenbaum at MIT, scored relatively well during the study, achieving a success rate of 27 percent. GPT-3.5, depending on the prompt, scored a 14 percent success rate, below ELIZA. GPT-4 achieved a success rate of 41 percent, second only to actual humans.

GPT-3.5, the base model behind the free version of ChatGPT, has been conditioned by OpenAI specifically not to present itself as a human, which may partially account for its poor performance. In a post on X, Princeton computer science professor Arvind Narayanan wrote, "Important context about the 'ChatGPT doesn't pass the Turing test' paper. As always, testing behavior doesn't tell us about capability." In a reply, he continued, "ChatGPT is fine-tuned to have a formal tone, not express opinions, etc, which makes it less humanlike. The authors tried to change this with the prompt, but it has limits. The best way to pretend to be a human chatting is to fine-tune on human chat logs.""

https://arstechnica.com/information-technology/2023/12/real-humans-appeared-human-63-of-the-time-in-recent-turing-test-ai-study/

happyborg, to ChatGPT
@happyborg@fosstodon.org avatar

Forget chatbot spam, there's much worse coming.

#ChatGPT style #chatbots are going to be fertile ground for cybervillains if we ever manage to beat #ransomware.

Assuming we do, by the time that happens and every organisation has come to rely on its in-house chatbot, after filling it with all their most precious data, I.P., strategy, plans, indiscretions, dodgy deals...

The mother of all insider threats is coming.

Think of your own chatbot as the therapist you can't trust. 🤔

estelle, to random

The terrible human toll in Gaza has many causes.
A chilling investigation by +972 highlights efficiency:

  1. An engineer: “When a 3-year-old girl is killed in a home in Gaza, it’s because someone in the army decided it wasn’t a big deal for her to be killed.”

  2. An AI outputs "100 targets a day". Like a factory with murder delivery:

"According to the investigation, another reason for the large number of targets, and the extensive harm to civilian life in Gaza, is the widespread use of a system called “Habsora” (“The Gospel”), which is largely built on artificial intelligence and can “generate” targets almost automatically at a rate that far exceeds what was previously possible. This AI system, as described by a former intelligence officer, essentially facilitates a “mass assassination factory.”"

  1. "The third is “power targets,” which includes high-rises and residential towers in the heart of cities, and public buildings such as universities, banks, and government offices."

🧶

estelle,

In 2019, the Israeli army created a special unit to create targets with the help of generative AI. Its objective: volume, volume, volume.
The effects on civilians (harm, suffering, death) are not a priority: https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/

#lawful #compliance #governance #anthropology #tech #techCulture #engineering #engineers #ethics @ethics #sociology @sociology #bias #AI #AITech #aiEthics #generativeAI #chatBots @ai @psychology @socialpsych #StochasticParrots @dataGovernance @data

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #Chatbots #ChatGPT #SentimentAnalysis #Surveillance #SocialMedia: "In a presentation at the Milipol homeland security conference in Paris on Tuesday, online surveillance company Social Links demonstrated ChatGPT performing “sentiment analysis," where the AI assesses the mood of social media users or can highlight commonly-discussed topics amongst a group. That can then help predict whether online activity will spill over into physical violence and require law enforcement action.

Founded by Russian entrepreneur Andrey Kulikov in 2017, Social Links now has offices in the Netherlands and New York; previously, Meta dubbed the company a spyware vendor in late 2022, banning 3,700 Facebook and Instagram accounts it allegedly used to repeatedly scrape the social sites. It denies any link to those accounts and the Meta claim hasn’t harmed its reported growth: company sales executive Rob Billington said the company had more than 500 customers, half of which were based in Europe, with just over 100 in North America. That Social Links is using ChatGPT shows how OpenAI’s breakout tool of 2023 can empower a surveillance industry keen to tout artificial intelligence as a tool for public safety."

https://www.forbes.com/sites/thomasbrewster/2023/11/16/chatgpt-becomes-a-social-media-spy-assistant/?sh=15b8f1b65cf6
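The "sentiment analysis" workflow the article describes (per-post mood labels aggregated into a group-level signal) can be sketched as a simple pipeline. This is not Social Links' actual system; `score_post` is a hypothetical keyword stand-in for what would in practice be a prompt to a chatbot:

```python
def score_post(text):
    """Stand-in classifier; a real system would prompt an LLM for a label instead."""
    negative = {"riot", "angry", "hate"}
    positive = {"great", "love", "calm"}
    words = set(text.lower().split())
    if words & negative:
        return "negative"
    if words & positive:
        return "positive"
    return "neutral"

def group_mood(posts):
    """Aggregate per-post labels into an overall mood for a group of users."""
    counts = {"negative": 0, "positive": 0, "neutral": 0}
    for post in posts:
        counts[score_post(post)] += 1
    return max(counts, key=counts.get)

posts = ["I love this park", "They are angry and talk of a riot", "hate this"]
print(group_mood(posts))  # "negative"
```

The aggregation step is what turns individual posts into the predictive signal the article says law enforcement is interested in, which is also where the surveillance concerns arise.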

remixtures, to uk Portuguese
@remixtures@tldr.nettime.org avatar

#UK #AI #GenerativeAI #Chatbots #ChatGPT #Teens: "Teenagers and children are far more likely than adults to have used generative AI, according to Ofcom’s latest research into the UK’s online habits.

The regulator said its latest study showed that four in five (79%) online teenagers aged 13-17 now use generative AI tools – which includes chatbots such as ChatGPT, with 40% of those aged 7-12 also using the technology.

Generative AI is capable of creating text, images or other media using learned behaviour.

In contrast, Ofcom said only 31% of adult internet users had used the technology – and among the 69% who had never used it, 24% did not know what it was."

https://www.independent.co.uk/tech/teenagers-ofcom-chatgpt-facebook-youtube-b2454508.html

remixtures, to OpenAI Portuguese
@remixtures@tldr.nettime.org avatar

: "OpenAI empezó como OpenAI Incorporated, una organización sin ánimo de lucro dedicada a promover el desarrollo de una inteligencia artificial (IA) segura, con el enfoque en la investigación abierta y la colaboración con empresas y universidades para abordar los desafíos éticos y de seguridad asociados a su desarrollo. Después llegó OpenAI Limited Partnership, la empresa con ánimo de lucro que reventó el mercado con un producto llamado ChatGPT. Esta última es la que cerró el código, subió la apuesta, y conduce a toda mecha hacia el horizonte utópico de una inteligencia artificial general.

El pasado día 6, esa OpenAI LP celebró su primera conferencia de desarrolladores, la verdadera puesta de largo de un coloso del sector. Sam presentó, en su mejor estilo Steve Jobs, una avalancha de propuestas comerciales, incluyendo GPT-4 Turbo, chatbots personalizados y hasta una App Store. Quiere crecer todo lo que pueda mientras pueda, busca la consolidación. Pero sigue controlada por la ONG idealista a través de la junta directiva que despidió a Sam. Hace tiempo que sus dos visiones se han vuelto incompatibles. Algo tenía que ceder."

https://elpais.com/opinion/2023-11-20/duelo-en-la-cumbre-de-la-inteligencia-artificial.html

remixtures, to OpenAI Portuguese
@remixtures@tldr.nettime.org avatar

#OpenAI #ChatGPT #Chatbots #AI #GenerativeAI: "In conversations between The Atlantic and 10 current and former employees at OpenAI, a picture emerged of a transformation at the company that created an unsustainable division among leadership. (We agreed not to name any of the employees—all told us they fear repercussions for speaking candidly to the press about OpenAI’s inner workings.) Together, their accounts illustrate how the pressure on the for-profit arm to commercialize grew by the day, and clashed with the company’s stated mission, until everything came to a head with ChatGPT and other product launches that rapidly followed. “After ChatGPT, there was a clear path to revenue and profit,” one source told us. “You could no longer make a case for being an idealistic research lab. There were customers looking to be served here and now.”

We still do not know exactly why Altman was fired, nor do we fully understand what his future is. Altman, who visited OpenAI’s headquarters in San Francisco this afternoon to discuss a possible deal, has not responded to our requests for comment. The board announced on Friday that “a deliberative review process” had found “he was not consistently candid in his communications with the board,” leading it to lose confidence in his ability to be OpenAI’s CEO. An internal memo from the COO to employees, confirmed by an OpenAI spokesperson, subsequently said that the firing had resulted from a "breakdown in communications” between Altman and the board rather than “malfeasance or anything related to our financial, business, safety, or security/privacy practices.” But no concrete, specific details have been given. What we do know is that the past year at OpenAI was chaotic and defined largely by a stark divide in the company’s direction."

https://www.theatlantic.com/technology/archive/2023/11/sam-altman-open-ai-chatgpt-chaos/676050/
