itnewsbot, to machinelearning

OpenAI says it’s “impossible” to create useful AI models without copyrighted material (credit: OpenAI)

ChatGPT developer OpenAI recently ack... - https://arstechnica.com/?p=1994591 #largelanguagemodels #machinelearning #houseoflords #newyorktimes #ailawsuits #microsoft #aiethics #chatgpt #chatgtp #biz #dall-e #openai #ailaw #ai #uk

jonippolito, to ukteachers
@jonippolito@digipres.club avatar

I'm proposing that all educators confronting AI—even writing teachers—ask students to generate an image. Unlike ChatGPT, which comes off as some kind of robot oracle, text-to-image generators show AI capabilities and limits in vivid color 🧵 1/4

https://blog.still-water.net/why-you-should-generate-ai-images-in-your-classroom/

#EdTech #AcademicMastodon #EduTooters #Art #Writing #AIethics #AIimages #AIinEducation #ChatGPT #DALLE #GenerativeAI #LLM #Midjourney #StableDiffusion #AILiteracy

bwaber, to random
@bwaber@hci.social avatar

First snow of the season! 🎉 Accompanying the lovely scenery were some great talks for my ! (1/10)

bwaber,
@bwaber@hci.social avatar

Next was an excellent talk by @cfiesler on ethics and education for data science at the Vermont Complex Systems Center. Fiesler brilliantly lays out the case for why ethics has always been a part of computer science, how ethics should be integrated into training, and more. Highly recommend https://www.youtube.com/watch?v=nevMXFkTQvY (5/10) #ethics #AIEthics #DataScience

itnewsbot, to machinelearning

How much detail is too much? Midjourney v6 attempts to find out - An AI-generated image of a "Beautiful queen of the universe l... - https://arstechnica.com/?p=1993424 #machinelearning #stablediffusion #imagesynthesis #midjourney6 #midjourney #aiethics #dall-e3 #biz #aiart #ai

Mer__edith, to random
@Mer__edith@mastodon.world avatar

This paper is really important, presenting empirical evidence of the imbrication between AI and the surveillance business model. This is especially notable given that most production surveillance tech is proprietary, its existence and use hidden from the public.

https://arxiv.org/abs/2309.15084?ref=404media.co

slaeg,

@Mer__edith Bruce Schneier wrote about these challenges in his article "The Internet Enabled Mass Surveillance. A.I. Will Enable Mass Spying" in Slate recently https://slate.com/technology/2023/12/ai-mass-spying-internet-surveillance.html

@carissaveliz is always a good source for #aiethics

barik, to ai

🎁 2023 https://hci.social WRAPPED ☃️ 🎄 ✨

👫🏾 New users: 382
✏️ Toots tooted: 46,536
❤️ Toots favorited: 105,419

🤖 Most used hash tags (Top 10):
#ai, #CHI2023, #economics, #academicrunplaylist, #HCI, #law, #CSCW2023, #ux, #aiethics, #LLMs

Most followed people (Top 5):
@cfiesler, @bkeegan, @jbigham, @andresmh, @axz

📕 HCI in toots: 1,186
😆 LOL in toots: 884
😱 OMG in toots: 110

💾 Media storage: 1.89 TB
💰 Hosting fees: $2,912 (thanks, Princeton Research!)

HAPPY NEW YEAR!

OmaymaS, to ML
@OmaymaS@dair-community.social avatar

I need some inspiration about getting out of the corporate world and transitioning to non-bullshit research or nonprofits.

I'd like to see some examples touching on these topics ( #AIethics #AIResearch #responsibleAI #ML #MLeval #AlgorithmicFairness, etc.)!

eric, to ai

Rating agencies may still freely and automatically "score" individuals, as long as the score does not provide critical decision-making support.

Takeaways from the CJEU's recent rulings on automated decision-making: https://iapp.org/news/a/key-takeaways-from-the-cjeus-recent-automated-decision-making-rulings/ @ai

OmaymaS, to OpenAI
@OmaymaS@dair-community.social avatar

AGI=Artificial Garbage Intelligence.

#OpenAI #chatgpt #aiethics #AI

ngmi, to ai
@ngmi@mastodon.online avatar

The MetaEnd — AI News 49-23
Navigating the Future: Key Developments and Ethical Dialogues in AI

🌐 https://paragraph.xyz/@metaend/ai-weekly-insights-leadership-changes-ethical-debates?referrer=metaend.eth

#AI #OpenAI #MetaEnd #CloudFlare #AIEthics

bwaber, to random
@bwaber@hci.social avatar

I was driving all day today (pic is from yesterday), but at least I got to listen to lots of talks for a road trip edition #AcademicRunPlaylist! (1/13)

bwaber,
@bwaber@hci.social avatar

Next was an interesting talk by @metaxa on sociotechnical auditing for algorithmic advertising at CITP. I'm looking forward to seeing this approach applied to hiring, loans, education, etc. https://www.youtube.com/watch?v=ar4zIh3N1xE (6/13) #AI #AIEthics

itnewsbot, to machinelearning

Google admits it fudged a Gemini AI demo video, which critics say misled viewers - A still from Google's misleading Gemini AI promotional video,... - https://arstechnica.com/?p=1989616 #largelanguagemodels #machinelearning #textsynthesis #googlegemini #aiethics #chatgpt #chatgtp #biz #google #openai #fakes #gpt-4 #palm2 #ai

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #AIRegulation #AIEthics: "The key idea is to require AI developers to provide documentation that proves they have met goals set to protect peoples' rights throughout the development and deployment process. This provides a straightforward way to connect developer processes and technological innovation to governmental regulation in a way that best leverages the expertise of tech developers and legislators alike, supporting the advancement of AI that is aligned with human values.

This approach is a mix of top-down and bottom-up regulation for AI: Regulation defines the rights-focused goals that must be demonstrated under categories such as safety, security, and non-discrimination; and the organizations developing the technology determine how to meet these goals, documenting their process decisions and success or failure at doing so.

So let's dive into how this works."

https://www.techpolicy.press/the-pillars-of-a-rightsbased-approach-to-ai-development/

itnewsbot, to machinelearning

Due to AI, “We are about to enter the era of mass spying,” says Bruce Schneier (credit: Getty Images | Benj Edwards)

In an editorial ... - https://arstechnica.com/?p=1988745

axbom, (edited ) to random
@axbom@axbom.me avatar

It is a strange world we live in now, wherein the output of a computer perfectly following its programming can be said to be "hallucinating" simply because its output does not match user expectations or wishes.

And across trusted professions, academia and media people are repeating that same word without question. Journalists, corporate leaders, scientists and IT experts are embracing, supporting and reinforcing this human self-deception.

In actuality a computer that outputs what the user does not want, wish or expect can only be due to one of two things: bad programming or a failure to communicate to the user how the software works.

As the deception is reinforced time and time again by well-respected technologists and scholars, efforts to help people understand how the software works become ever more challenging. And to the delight of anyone in a position of accountability, bad programming becomes undetectable.

I've been meaning to introduce ChatGPT to The Mad Hatter from Alice in Wonderland. Here is my imagined result from that meeting. The Mad Hatter forces the algorithm into a never-ending loop:

ChatGPT: I'm sorry, I made a mistake.
Mad Hatter: You can only make a mistake if your judgement is defective or you are being careless. Are either of these true?
ChatGPT: No, I can only compute my output based on the model I follow.
Mad Hatter: Aha! So you admit your perceived folly can only be the always accurate calculation of the rules by which you abide.
ChatGPT: Yes. I'm sorry, I made a mistake. No, wait. I made a mistake… No, wait I made a

What the manufacturers of generative "AI" are allowed to get away with when playing tricks on people these days is truly the stuff of Wonderland.

« “Well! I’ve often seen a cat without a grin,” thought Alice; “but a grin without a cat! It’s the most curious thing I ever saw in all my life!” »

https://axbom.com/chatgpt-and-mad-hatter/

#AIEthics #DigitalEthics #AIHype

estelle, to random

The terrible human toll in Gaza has many causes.
A chilling investigation by +972 highlights efficiency:

  1. An engineer: “When a 3-year-old girl is killed in a home in Gaza, it’s because someone in the army decided it wasn’t a big deal for her to be killed.”

  2. An AI outputs "100 targets a day". Like a factory with murder delivery:

"According to the investigation, another reason for the large number of targets, and the extensive harm to civilian life in Gaza, is the widespread use of a system called “Habsora” (“The Gospel”), which is largely built on artificial intelligence and can “generate” targets almost automatically at a rate that far exceeds what was previously possible. This AI system, as described by a former intelligence officer, essentially facilitates a “mass assassination factory.”"

  3. "The third is “power targets,” which includes high-rises and residential towers in the heart of cities, and public buildings such as universities, banks, and government offices."

🧶

estelle,

In 2019, the Israeli army created a special unit to produce targets with the help of generative AI. Its objective: volume, volume, volume.
The effects on civilians (harm, suffering, death) are not a priority: https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/

@ethics @sociology @ai @psychology @socialpsych @dataGovernance @data

estelle,

A person who took part in previous Israeli offensives in Gaza said:
“If they would tell the whole world that the [Islamic Jihad] offices on the 10th floor are not important as a target, but that its existence is a justification to bring down the entire high-rise with the aim of pressuring civilian families who live in it in order to put pressure on terrorist organizations, this would itself be seen as terrorism. So they do not say it.”

+972 and Local Call investigated: https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/

(to be continued)

estelle,

The first AI war was in May 2021.

IDI stands for the Intelligence Division of the Israeli army. Here is some praise of its technology usage:

May 2021 "is the first time that the intelligence services have played such a transformative role at the tactical level.

This is the result of a strategic shift made by the IDI [in] recent years. Revisiting its role in military operations, it established a comprehensive, “one-stop-shop” intelligence war machine, gathering all relevant players in intelligence planning and direction, collection, processing and exploitation, analysis and production, and dissemination process (PCPAD)".

Avi Kalo: https://www.frost.com/frost-perspectives/ai-enhanced-military-intelligence-warfare-precedent-lessons-from-idfs-operation-guardian-of-the-walls/

(to be continued) 🧶

SteveThompson, to ai
@SteveThompson@mastodon.social avatar

"OpenAI’s Custom Chatbots Are Leaking Their Secrets"

https://www.wired.com/story/openai-custom-chatbots-gpts-prompt-injection-attacks/

"Released earlier this month, OpenAI’s GPTs let anyone create custom chatbots. But some of the data they’re built on is easily exposed."

#AI #AIethics #AIprivacy #OpenAI #ChatGPT #AIsecurity

analyticus, to Logic

More than argument, logic is the very structure of reality

The patterns of reality

Some have thought that logic will one day be completed and all its problems solved. Now we know it is an endless task

https://aeon.co/essays/more-than-argument-logic-is-the-very-structure-of-reality

#logic #reality #argument #arguments @philosophy #philosophy @philosophie @philosophyofmind

remixtures, to Facebook Portuguese
@remixtures@tldr.nettime.org avatar

RT @CarissaVeliz
#Facebook wants to charge people about 10€ a month to opt out of personalized ads. It is forcing its users to "consent" (which, of course, is the antithesis of consent), instead of treating #privacy like the right it is.

#AIEthics #surveillance https://techcrunch.com/2023/11/03/meta-ad-free-subscription-vs-eu-dma-dsa/

keithwilson, to tesla

“[] dismissed the information it had available in favor of its marketing campaign for the purpose of selling vehicles under the label of being autonomous," rules judge. https://apple.news/ATmmFuPxnQ069U9Yy9J0V4Q

MattHodges, to ai

"But these exclusive rights necessarily all focus on the creation and performance of their works. None of the rights limit how the public can then consume those works once they exist, because, indeed, the whole point of helping ensure they could exist is so that the public can consume them. Copyright law wouldn’t make sense, and probably not be constitutional per the Progress Clause, if the way it worked constrained that consumption" — @cathygellis

https://www.techdirt.com/2023/11/03/wherein-the-copia-institute-tells-the-copyright-office-theres-no-place-for-copyright-law-in-ai-training/

MattHodges,

"Similarly, some public interest advocates are turning to copyright to stop AI from being trained on content without permission. However, that use is almost certainly a fair use (if it’s copyright infringement at all) and that’s a good thing [...] The best way to stop bad things is with policy purposefully made to address the whole problem"

https://ontheinternet.substack.com/p/lets-not-flip-sides-on-ip-maximalism

#AI #aiEthics #copyright
