jonny, to random
@jonny@neuromatch.social avatar

Glad to formally release my latest work - Surveillance Graphs: Vulgarity and Cloud Orthodoxy in Linked Data Infrastructures.

web: https://jon-e.net/surveillance-graphs
hcommons: https://doi.org/10.17613/syv8-cp10

A bit of an overview and then I'll get into some of the more specific arguments in a thread:

This piece is in three parts:

First I trace the mutation of the liberatory ambitions of the #SemanticWeb into #KnowledgeGraphs, an underappreciated component in the architecture of #SurveillanceCapitalism. This mutation plays out against the backdrop of the broader platform capture of the web, rendering us as consumer-users of information services rather than empowered people communicating over informational protocols.

I then show how this platform logic influences two contemporary public information infrastructure projects: the NIH's Biomedical Data Translator and the NSF's Open Knowledge Network. I argue that projects like these, while well intentioned, demonstrate the fundamental limitations of platformatized public infrastructure and create new capacities for harm through their enmeshment in, and inevitable capture by, information conglomerates. The dream of a seamless "knowledge graph of everything" is unlikely to deliver on the utopian promises made by techno-solutionists, but these projects do create new opportunities for algorithmic oppression -- automated conversion therapy, predictive policing, abuse of bureaucracy in "smart cities," etc. Given the framing of corporate knowledge graphs, these projects are poised to create facilitating technologies (that the info conglomerates write about needing themselves) for a new kind of interoperable corporate data infrastructure, where a gradient of public to private information is traded between "open" and quasi-proprietary knowledge graphs to power derivative platforms and services.

When approaching "AI" from the perspective of the semantic web and knowledge graphs, it becomes apparent that the new generation of #LLMs are intended to serve as interfaces to knowledge graphs. These "augmented language models" are joint systems that combine a language model as a means of interacting with some underlying knowledge graph, integrated in multiple places in the computing ecosystem: eg. mobile apps, assistants, search, and enterprise platforms. I concretize and extend prior criticism about the capacity for LLMs to concentrate power by capturing access to information in increasingly isolated platforms and expand surveillance by creating the demand for extended personalized data graphs across multiple systems from home surveillance to your workplace, medical, and governmental data.
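The "augmented language model" pattern described above can be sketched in miniature: a language model acts as the natural-language front end, while the actual facts come from an underlying knowledge graph. Everything below is a hypothetical illustration (the graph contents, the `mock_language_model` stand-in, and the routing logic are all invented for this sketch, not any vendor's real API):

```python
# Minimal sketch of an "augmented language model": a language-model front end
# serving as a natural-language interface to an underlying knowledge graph.
# All names here are hypothetical; a real system would use a trained model
# and a production graph store (e.g. a SPARQL endpoint) rather than a dict.

# A toy knowledge graph stored as (subject, predicate) -> object triples.
KNOWLEDGE_GRAPH = {
    ("ada_lovelace", "occupation"): "mathematician",
    ("ada_lovelace", "born"): "1815",
}

def mock_language_model(question: str) -> tuple[str, str]:
    """Stand-in for the LLM step: parse a question into a graph query.
    Real systems delegate this translation to the language model itself."""
    q = question.lower()
    subject = "ada_lovelace" if "lovelace" in q else ""
    predicate = "born" if "born" in q else "occupation"
    return subject, predicate

def augmented_answer(question: str) -> str:
    """Route the question through the 'model', then ground the answer in
    the graph -- the platform, not the model, holds the knowledge."""
    key = mock_language_model(question)
    fact = KNOWLEDGE_GRAPH.get(key)
    return fact if fact else "no grounded answer"

print(augmented_answer("When was Lovelace born?"))
```

The point of the sketch is the division of labor: the model is an interface, the graph is the asset. Whoever controls the graph controls what the joint system can say, which is why capture of these graphs matters more than the model weights themselves.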

I pose Vulgar Linked Data as an alternative to the infrastructural pattern I call the Cloud Orthodoxy: rather than building platforms operated by an informational priesthood, reorienting our public infrastructure efforts to support vernacular expression across heterogeneous #p2p mediums. This piece extends a prior work of mine, Decentralized Infrastructure for (Neuro)science, which has a more complete draft of what that might look like.

(I don't think you can pre-write threads on masto, so I'll post some thoughts as I write them under this) /1

#SurveillanceGraphs

jonny,
@jonny@neuromatch.social avatar

Though the aims of the projects themselves dip into the colonial dream of the great graph of everything, the true harms of both projects come from what happens with the technologies after they end. Many information conglomerates are poised to pounce on the infrastructures built by the NIH and NSF projects, stepping in to integrate their work or buy the startups that spin off from them.

The NSF's Open Knowledge Network is much more explicitly bound to the national security and economic interests of the US federal government, intended to provide the infrastructure to power an "AI-driven future." That project is at a much earlier stage, but in its early sketches it promises to take the same patterns of knowledge-graphs plus algorithmic platforms and apply them to government, law enforcement, and a broad range of other domains.

This pattern of public graphs for private profits is well underway at existing companies like Google, and I assume the academics and engineers in both of these projects are operating with the best of intentions and perhaps playing a role they are unaware of.

/6

#SurveillanceGraphs

ronent,

@tkuhn @bengo @photocyte @jonny @knowledgepixels
Love this thread synthesizing so many cool directions! Adding our own perspective to the mix (along with @InferenceActive on birdsite) :
https://osf.io/preprints/metaarxiv/9nb3u/
TL;DR
Trying to make the case that attention/sensemaking data (eg what researchers are attending to and their assessments of content) are an important kind of nano-scientific knowledge that gets extracted by platforms instead of helping to power content curation and discovery networks

upol, to ChatGPT
@upol@hci.social avatar

Is this another version of the "synthetic users" argument?

Just don't understand the jump from performing well to...well can we just replace humans?

But y tho? 🫠😵‍💫

shiwali,

@upol This example is a great illustration of different use cases for statistics. The graph shows that humans actually vary to a significant extent. Correlation only measures whether ChatGPT increases its rating when humans increase theirs. But the heterogeneity in human answers matters, because disagreements are meaningful.

shiwali,

@upol Fully agree. I don't get why we would want to replace humans, ESPECIALLY for these cases.

cassidy, to ai
@cassidy@blaede.family avatar

I get that it’s hot right now, but man, the user experience of LLMs being this bot you type text to seems like a huge step backwards compared to just integrating these AI features natively into products.

#GoogleIO #LLMs #AI

cassidy,
@cassidy@blaede.family avatar

This AI stuff is a snoozefest.

I 👏 DON’T 👏 CARE 👏

Wake me up when you talk about Nest and Pixel.

#GoogleIO

cassidy,
@cassidy@blaede.family avatar

I don’t use Google search anymore, but man, they really are taking over the entire search engine with their own AI model instead of showing results. I feel like web publishers are gonna be pissed.

#GoogleIO

williamgunn, to ai
@williamgunn@mastodon.social avatar

A gallery of ways #LLMs can be used for evil
https://llmsaregoinggreat.com/evil

#ai #artificialintelligence

Jigsaw_You, to random Dutch
@Jigsaw_You@mastodon.nl avatar

#ChatGPT is powered by a hidden army of contractors making $15 per hour. For a technology that is supposedly threatening many jobs, #AI tools require a large workforce to ensure accuracy and trust through a human feedback loop, something #Google has failed to do with #Bard.

#llm #llms #machinelearning #chatbots #ethicalai

https://www.techspot.com/news/98600-chatgpt-powered-hidden-army-contractors-making-15-hour.html

ErikJonker,
@ErikJonker@mastodon.social avatar

@Jigsaw_You
... the human training part is one of the reasons why it's successful. Personally I don't think AI will replace jobs, only change them, as earlier waves of IT/automation did

Jigsaw_You, to random Dutch
@Jigsaw_You@mastodon.nl avatar

“There is a world in which generative #AI, as a powerful predictive research tool and a performer of tedious tasks, could indeed be marshalled to benefit humanity, other species and our shared home. But for that to happen, these technologies would need to be deployed inside a vastly different economic and social order than our own” - Naomi Klein

#artificialintelligence #machinelearning #llm #llms

https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein

Jigsaw_You, to opensource Dutch
@Jigsaw_You@mastodon.nl avatar

“The concern is that machine-generated content has to be balanced with a lot of human review and would overwhelm lesser-known wikis with bad content. While #AI generators are useful for writing believable, human-like text, they are also prone to including erroneous information, and even citing sources and academic papers which don’t exist.”

#artificialintelligence #machinelearning #llm #llms #opensource #wikipedia

https://www.vice.com/en/article/v7bdba/ai-is-tearing-wikipedia-apart

ErikJonker,
@ErikJonker@mastodon.social avatar

@Jigsaw_You ... very good example, I like their positive but responsible approach: "The university encourages the use of AI chatbots to support teaching and learning and develop students' learning and working skills. The key aspects of using them are purposefulness, ethics, transparency, and critical approach."

jchyip, to random
@jchyip@mastodon.online avatar

Wondering how well #PromptEngineering patterns hold across #LLMs or are they more like LLM-specific idioms?

bigdata, to random

🆕 Newsletter 🚀 Building software systems with LLMs and other Generative Models will primarily involve writing text instructions → I explore the fascinating world of prompt engineering, LLMs & #NLProc pipelines.
#MachineLearning #GenerativeAI #LLMs
🔗 https://gradientflow.substack.com/p/the-future-of-prompt-engineering

jchyip, to random
@jchyip@mastodon.online avatar

It's amusing (interesting?) that #LLMs suffer from #RecencyBias.

ppatel, to accessibility
@ppatel@mstdn.social avatar

Q&A with Vint Cerf, chief internet evangelist at #Google and recipient of IEEE's Medal of Honor, on how Google has changed since 2005, the hazards of #LLMs, #accessibility for disabled people, and more.

https://techcrunch.com/2023/05/05/vint-cerf-on-the-exhilarating-mix-of-thrill-and-hazard-at-the-frontiers-of-tech/

remixtures, to random Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #LLMs #Chatbots #ChatGPT: "Do you think the public has been too credulous about ChatGPT?

It’s not just the public. Some of your friends at your newspaper have been a bit credulous. In my book, “Rebooting A.I.,” we talked about the Eliza effect — we called it the “gullibility gap.” In the mid-1960s, Joseph Weizenbaum wrote this primitive piece of software called Eliza, and some people started spilling their guts to it. It was set up as a psychotherapist, and it was doing keyword matching. It didn’t know what it was talking about, but it wrote text, and people didn’t understand that a machine could write text and not know what it was talking about. The same thing is happening right now. It is very easy for human beings to attribute awareness to things that don’t have it. The cleverest thing that OpenAI did was to have GPT type its answers out one character at a time — made it look like a person was doing it. That adds to the illusion. It is sucking people in and making them believe that there’s a there there that isn’t there. That’s dangerous. We saw the Jonathan Turley incident, when it made up sexual harassment charges. You have to remember, these systems don’t understand what they’re reading. They’re collecting statistics about the relations between words. If everybody looked at these systems and said, “It’s kind of a neat party trick, but haha, it’s not real,” it wouldn’t be so disconcerting. But people believe it because it’s a search engine. It’s from Microsoft. We trust Microsoft. Combine that human overattribution with the reality that these systems don’t know what they’re talking about and are error-prone, and you have a problem."

https://www.nytimes.com/interactive/2023/05/02/magazine/ai-gary-marcus.html

remixtures, to random Portuguese
@remixtures@tldr.nettime.org avatar

#Video #TV #Streaming #Hollywood #Screenwriters #AI #LLMs:"Outside of the Netflix headquarters in New York City on Wednesday, hundreds of members of the Writer’s Guild of America (WGA) marched for a better contract on the second day of the writer's strike. They were there to communicate a clear message: Writers refuse to be replaced by AI.

Signs showcased slogans such as “Writers Generate All of it,” “Don’t Let ChatGPT Write ‘Yellowstone’,” “I Told ChatGPT To Make A Sign and It Sucked,” and “Don’t Uber Writing.” These signs referred to the unprecedented “AI” category in the guild’s proposal in which they asked to regulate the use of AI on union projects but were met with refusal from studios. Writers are seeking pay for episodes on streaming platforms, and to not have their work devalued and turned into gig labor due to the use of text-generating AI programs to write dialog."

https://www.vice.com/en/article/5d9gkq/striking-writers-are-on-the-front-line-of-a-battle-between-ai-and-workers
