remixtures, to Bulgaria Portuguese
@remixtures@tldr.nettime.org avatar

#EU #DSA #Algorithms #ContentModeration: "The European Union’s Digital Services Act, which will eventually apply to any online service provider, will take effect for very large online platforms with more than 45 million users. Requirements under the law include a ban on targeting users with ads based on sensitive data, transparency requirements about how platforms’ algorithms work, and new liability obligations for illegal content such as hate speech and bans on deceptive design patterns.

The regulations are already shaping up to have a significant impact on how American tech companies treat user data in Europe. The DSA prohibits large tech companies from targeting advertising using sensitive data such as sexual orientation and entirely prohibits targeted ads against children.

Sensitive data as defined in the DSA refers to a broad range of attributes, including sexual orientation, religion, health history and political persuasion. “Just eliminating this type of data from the profiling of users for targeted advertising is going to be a very difficult task, regardless of the size of the company,” Gabriela Zanfir-Fortuna, vice president for global privacy at the Future of Privacy Forum, told CyberScoop."

https://cyberscoop.com/eu-dsa-american-tech-firms/
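The excerpt above lists the attribute categories the DSA treats as sensitive. As a minimal illustration of what "eliminating this type of data from profiling" might look like mechanically, here is a hypothetical sketch (the field names and profile schema are invented for illustration, not a real ad system's):

```python
# Hypothetical sketch: dropping DSA-sensitive attributes from an ad
# profile before targeting. Field names are illustrative only.
# The hard part, as Zanfir-Fortuna notes, is that this simple filter
# is not enough: non-sensitive fields can still proxy for sensitive ones.

SENSITIVE_FIELDS = {
    "sexual_orientation",
    "religion",
    "health_history",
    "political_persuasion",
}

def strip_sensitive(profile: dict) -> dict:
    """Return a copy of the profile with sensitive attributes removed."""
    return {k: v for k, v in profile.items() if k not in SENSITIVE_FIELDS}

profile = {"age_band": "25-34", "religion": "x", "interests": ["cycling"]}
print(strip_sensitive(profile))
# → {'age_band': '25-34', 'interests': ['cycling']}
```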

senficon, to Instagram
@senficon@ohai.social avatar

I'm skeptical of calls for perfect #contentmoderation. There is a trade-off between under- and over-blocking. But there is no excuse for #instagram failing to block tons of crime ads that can be found as easily as doing a full text search for “t.me” in its ad library, then throttling the media report that breaks the story. https://www.404media.co/instagram-throttles-404-media-investigation-into-drug-ads-on-instagram-continues-to-let-people-advertise-drugs/ /ht @jasonkoebler

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

#SocialMedia #ContentModeration #Instagram #Meta #Ads: "Instagram limited the reach of a 404 Media investigation into ads for drugs, guns, counterfeit money, hacked credit cards, and other illegal content on the platform within hours of us posting it. Instagram said it did this because the content, which was about Instagram’s content it failed to moderate on its own platform, didn’t follow its “Recommendation Guidelines.” Later that evening, while that post was being throttled, I got an ad for “MDMA,” and Meta’s ad library is still full of illegal content that can be found within seconds.

This means Meta continues to take money from people blatantly advertising drugs on the platform while limiting the reach of reporting about that content moderation failure. Instagram's Recommendation Guidelines limit the reach of posts that "promotes the use of certain regulated products such as tobacco or vaping products, adult products and services, or pharmaceutical drugs.""

https://www.404media.co/instagram-throttles-404-media-investigation-into-drug-ads-on-instagram-continues-to-let-people-advertise-drugs/

spocko, to VintageOSes
@spocko@mastodon.online avatar

I agree with Glenn Kirschner, "It's time to detain Trump pending trial."
Trump physically in jail means he won't personally be able to threaten witnesses during a rally or in live jailhouse interviews.
#Trump's use of threats works for him politically. If we can't stop them completely, we need a way to limit them.
That includes requiring social media companies to enforce their own #TOS on threats.
#TruthSocial #ElonMusk #X #ContentModeration
https://www.spockosbrain.com/2023/08/22/will-trumps-threats-on-social-media-send-him-to-jail/

communitysignal, to trustandsafety
@communitysignal@mastodon.social avatar

“[The] capacity for outsiders to gain voice and win is a really valuable part of the internet. At the same time, what we’ve seen both in politics and media is that the absence of gatekeepers has this real downside.” –@owasow

▶️https://www.communitysignal.com/blackplanets-founder-on-building-impactful-platforms-and-communities/

#OnlineCommunities #CommunityManagement #CommunityManager #CMGR #ContentModeration #TrustAndSafety #SocialMedia #CommunitySignal

itnewsbot, to FreeSpeech
@itnewsbot@schleuss.online avatar

The Kids Online Safety Act isn’t all right, critics say (credit: Aurich Lawson | Getty Images)

Debate continues... - https://arstechnica.com/?p=1960313

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

"The petition filed by the moderators relates to a contract between OpenAI and Sama – a data annotation services company headquartered in California that employs content moderators around the world. While employed by Sama in 2021 and 2022 in Nairobi to review content for OpenAI, the content moderators allege, they suffered psychological trauma, low pay and abrupt dismissal.

The 51 moderators in Nairobi working on Sama’s OpenAI account were tasked with reviewing texts, and some images, many depicting graphic scenes of violence, self-harm, murder, rape, necrophilia, child abuse, bestiality and incest, the petitioners say.

The moderators say they weren’t adequately warned about the brutality of some of the text and images they would be tasked with reviewing, and were offered no or inadequate psychological support. Workers were paid between $1.46 and $3.74 an hour, according to a Sama spokesperson."

https://www.theguardian.com/technology/2023/aug/02/ai-chatbot-training-human-toll-content-moderator-meta-openai

remixtures, to Bulgaria Portuguese
@remixtures@tldr.nettime.org avatar

#EU #Google #Search #SearchEngines #ContentModeration #PoliticalEconomy #AI #GenerativeAI: "In this piece, which frames the special issue, “The State of Google Critique and Intervention,” we provide an overview of research focusing on Google as an object of critical study, fleshing out the European interventions that actively attempt to address its dominance. The article begins by mapping out key areas of articulating a Google critique, from the initial focus on ranking and profiling to the subsequent scrutiny of user exploitation and competitive imbalance. As such, it situates the contributions to this special issue concerning search engine bias and discrimination, the ethics of Google Autocomplete, Google's content moderation, the commodification of engine audiences and the political economy of technical systems in a broader history of Google criticism. It then proceeds to contextualize the European developments that put forward alternatives and draws attention to legislative efforts to curb the influence of big tech. We conclude by identifying a few avenues for continued critical study, such as Google's infrastructural bundling of generative artificial intelligence with existing products, to emphasize the importance of intervention in the future."

https://journals.sagepub.com/doi/10.1177/20539517231191528

remixtures, (edited ) to internet Portuguese
@remixtures@tldr.nettime.org avatar

#SocialMedia #Twitter #HateSpeech #ContentModeration: "Twitter — now called X — is threatening to sue the Center for Countering Digital Hate (CCDH) for its research into hate speech on Twitter that the company claims is driving away advertisers. In a letter to the CCDH, which was first reported by The New York Times, Twitter lawyer Alex Spiro claims that the CCDH “regularly posts articles making inflammatory, outrageous, and false or misleading assertions” about Twitter with the goal of harming its reputation.

The CCDH is an organization that aims to hold social media companies accountable for the spread of hateful material online. Since Elon Musk’s takeover of Twitter, the CCDH has published numerous reports that suggest the platform is failing to protect users from hate speech. In its most recent study, the CCDH found that Twitter doesn’t take action against 99 percent of hate speech posted by paid subscribers to Twitter Blue."

https://www.theverge.com/2023/7/31/23813869/twitter-x-ccdh-lawsuit-elon-musk-anti-hate-claims

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #OpenAI #ChatGPT #Kenya #ContentModeration: "The company used the categorized passages to build an AI safety filter that it would ultimately deploy to constrain ChatGPT from exposing its tens of millions of users to similar content.

“My experience in those four months was the worst experience I’ve ever had in working in a company,” Alex Kairu, one of the Kenya workers, said in an interview.

OpenAI marshaled a sprawling global pipeline of specialized human labor for over two years to enable its most cutting-edge AI technologies to exist, the documents show. Much of this work was benign, for instance, teaching ChatGPT to be an engaging conversationalist or witty lyricist. AI researchers and engineers say such human input will continue to be essential as OpenAI and other companies hone the technology."

https://www.wsj.com/articles/chatgpt-openai-content-abusive-sexually-explicit-harassment-kenya-workers-on-human-workers-cf191483

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

#SocialMedia #Algorithms #ContentModeration #RecommendationEngines: "Six observations on ranking by engagement:

  1. Internet platforms rank content primarily by the predicted probability of engagement.
  2. Platforms rank by engagement because it increases user retention.
  3. Engagement is negatively related to quality.
  4. Sensitive content is often both engaging and retentive.
  5. Sensitive content is often preferred by users.
  6. Platforms don’t want sensitive content but don’t want to be seen to be removing it."

https://tecunningham.github.io/posts/2023-04-28-ranking-by-engagement.html
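Observation 1 above reduces to a simple scoring rule: sort the candidate feed by a model's predicted probability of engagement, highest first. A minimal sketch, with hypothetical item names and probabilities (and ignoring the real-world blending of multiple signals):

```python
# Minimal sketch of "ranking by predicted engagement".
# Item ids and p_engage values are invented for illustration; in a real
# platform p_engage would come from a trained prediction model.

def rank_by_engagement(items):
    """Order feed items by predicted probability of engagement, descending."""
    return sorted(items, key=lambda item: item["p_engage"], reverse=True)

feed = [
    {"id": "calm-news",    "p_engage": 0.02},
    {"id": "outrage-bait", "p_engage": 0.31},
    {"id": "friend-post",  "p_engage": 0.12},
]

ranked = rank_by_engagement(feed)
print([item["id"] for item in ranked])
# → ['outrage-bait', 'friend-post', 'calm-news']
```

The toy numbers illustrate observations 3 and 4: if sensitive or low-quality content predicts the highest engagement, this rule surfaces it first unless the platform adds separate demotion logic.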

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

#SocialMedia #Mastodon #ContentModeration #CSAM: "During a two-day test, researchers at the Stanford Internet Observatory found over 600 pieces of known or suspected child abuse material across some of Mastodon’s most popular networks, according to a report shared exclusively with The Technology 202.

Researchers reported finding their first piece of content containing child exploitation within about five minutes. They would go on to uncover roughly 2,000 uses of hashtags associated with such material. David Thiel, one of the report’s authors, called it an unprecedented sum.

“We got more PhotoDNA hits in a two-day period than we’ve probably had in the entire history of our organization of doing any kind of social media analysis, and it’s not even close,” said Thiel, referring to a technique used to identify pieces of content with unique digital signatures. Mastodon did not return a request for comment."

https://archive.fo/BEdp5#selection-641.0-693.48
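The "unique digital signatures" technique mentioned above works by computing a signature for each piece of content and checking it against a database of signatures of known material. A rough sketch of that lookup, with a caveat: real PhotoDNA is a proprietary *perceptual* hash robust to resizing and re-encoding, whereas the cryptographic hash used here as a stand-in only matches exact byte-for-byte copies. The example signature database is entirely hypothetical.

```python
import hashlib

# Sketch of signature matching against a database of known material.
# SHA-256 is a stand-in for illustration; PhotoDNA-style perceptual
# hashes also match visually similar (re-encoded, resized) copies.

def signature(data: bytes) -> str:
    """Compute a content signature (here: an exact-match hash)."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical database of signatures of previously identified material.
known_signatures = {signature(b"known-example-bytes")}

def is_known_match(data: bytes) -> bool:
    """Check whether content matches any known signature."""
    return signature(data) in known_signatures

print(is_known_match(b"known-example-bytes"))  # → True
print(is_known_match(b"unrelated-content"))    # → False
```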

remixtures, to Bulgaria Portuguese
@remixtures@tldr.nettime.org avatar

#EU #ContentModeration #SocialMedia #EMFA: "Content shared by media service providers on VLOPs should not be exempt from moderation protocols through a carte blanche exception from regulation provisions.

If passed, Article 17 would:

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

#SocialMedia #Africa #Kenya #ContentModeration #TikTok #Meta: "That means Meta can be sued in Kenya for labor rights violations, even though moderators are technically employed by a third party contractor.

Social media giant TikTok was watching the case closely. The company also uses outsourced moderators in Kenya, and in other countries in the global south, through a contract with Luxembourg-based Majorel. Leaked documents obtained by the NGO Foxglove Legal, seen by WIRED, show that TikTok is concerned it could be next in line for possible litigation.

“TikTok will likely face reputational and regulatory risks for its contractual arrangement with Majorel in Kenya,” the memo says. If the Kenyan courts rule in the moderators’ favor, the memo warns “TikTok and its competitors could face scrutiny for real or perceived labor rights violations.”"

https://www.wired.com/story/tiktok-leaked-documents/

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

#SocialMedia #Twitter #Musk #ContentModeration #HateSpeech #Advertising: "Elon Musk's Twitter acquisition, and the series of content policy changes that ensued, has led to a dramatic spike in hateful, violent and inaccurate posts on the platform, according to researchers. That's now the top challenge for Twitter's new Chief Executive Officer Linda Yaccarino, who has to address advertisers’ concerns about the trend in order to boost revenue and pay back the company's debts.

Musk and Yaccarino have touted updates to the site’s policies, such as letting advertisers prevent their posts from showing up next to certain kinds of content. Still, advertising sales are down by half since Musk took control of the company in October, he said this week. That’s in part because businesses don’t believe there has been significant progress in resolving the problem."

https://www.bloomberg.com/news/articles/2023-07-19/twitter-s-surge-in-harmful-content-a-barrier-to-advertiser-return#xj4y7vzkg

remixtures, to Bulgaria Portuguese
@remixtures@tldr.nettime.org avatar

#EU #EC #DSA #ContentModeration #SocialMedia #API: "Platforms have made an awful lot of mistakes over the years by rushing to ship products without waiting to identify or fix their problems. The EU Commission has been falling into a similar pattern lately. Its 2019 copyright filtering mandate, for example, was passed before lengthy consultations that illustrated the near-impossibility of actually complying with the law. The Commission wound up issuing detailed and complex guidance telling Member States how to implement that law, but published the guidelines just days before those countries’ 2021 deadline for doing so. Most countries simply, and understandably, missed their compliance deadlines — with little or no consequence. Platforms regulated by the DSA, and struggling to interpret late-arriving information like the database API specifications, may not have such leeway. The database API specifications are just one aspect of the DSA being rushed to launch this summer. We should hope that EU regulators, with their new authority to shape platforms’ technologies and behavior, don’t replicate platforms’ own mistakes."

https://cyberlaw.stanford.edu/blog/2023/07/rushing-launch-eus-platform-database-experiment

patrickokeefe, to trustandsafety
@patrickokeefe@mastodon.social avatar

When moderation, trust, and safety features get co-opted by marketing.

#CommunityManagement #ContentModeration #TrustAndSafety

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

#SocialMedia #Reddit #APIs #OpenData #ContentModeration: "In our survey, we learned the following about moderators and researchers’ use of the API:

  • API access is fundamentally about safety on the platform, through the direct work of moderators and the support for safety provided by researchers
  • Safety, accessibility, and spam management on Reddit rely on software created by moderators and researchers in the face of over a decade of under-investment by the company in content moderation — software that depends on API access.
  • API disruptions are putting the careers of students and junior scholars at risk, as well as millions of dollars in grant funded research.
  • Reddit has made vague promises to provide free API access to researchers and moderators. But the company’s promises fail to meet the full needs of researchers and communities, and give Reddit the ability to block research uses.
  • To date, negotiations with Reddit are stalled, and the company needs to be more responsive to community needs."

https://independenttechresearch.org/reddit-survey-results/

GottaLaff, to random
@GottaLaff@mastodon.social avatar

For those asking:

Via @emptywheel

Periodical update for those asking.

I'll remain here [Twitter] so long as there's a purpose (could be hours).

I am emptywheel@mastodon.social.
I'm emptywheel at bluesky, once that opens up.

I have no plans to join Threads (and couldn't if I wanted to yet, bc I live in GDPRtopia).

spocko,
@spocko@mastodon.online avatar

@GottaLaff @emptywheel so I was listening to the #SistersInLaw podcast today and they are telling people to contact them on #Twitter or #Threads. I think this is Corp America saying "We think Threads is #brandsafe and our audience is there."

What this tells me is that we need to put pressure on #Meta to enforce their #ContentModeration & #CommunityGuidelines.
Start by showing the RW accounts that are violating the #TOS & demand they be removed.

glecharles, to random
@glecharles@zirk.us avatar

I missed the latest shenanigans from the clowns running #Substack, but it looks like I'll definitely be changing my newsletter host when I'm back from vacation rather than waiting to find something better because they're clearly happy being the Nazi Bar. FTS.

#newsletters #ContentModeration #SocialReboot

https://open.substack.com/pub/katz/p/substack-triples-down?utm_source=share&utm_medium=android&r=4861

communitysignal, to trustandsafety
@communitysignal@mastodon.social avatar

“[When it comes to moderator tools], it’s often the community of people who need something driving it more so than the platforms themselves.” -@patrickokeefe

▶️https://www.communitysignal.com/the-chief-community-officer-hype-machine/

#OnlineCommunities #CommunityManagement #CommunityManager #CMGR #ContentModeration #TrustAndSafety

dcreemer, to threads
@dcreemer@sfba.social avatar

Thinking about #threads and #contentmoderation

“Unlimited tolerance must lead to the disappearance of tolerance. If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed, and tolerance with them. [...] We should therefore claim, in the name of tolerance, the right not to tolerate the intolerant.”
― Karl R. Popper

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #Chatbots #Kenya #ContentModeration #DataAnnotation: "Annie Minoff: Data annotation basically means labeling images or text passages so that AI systems can learn from them. For example, labeling thousands of pictures of street scenes so that an AI system can learn what a stop sign or a tree looks like. But Bill's team wouldn't be labeling images for long, because in November of 2021 the job changed. Sama had a new client, OpenAI.

Karen Hao: OpenAI had basically tens of thousands of text passages that they needed labeled. So they would deliver these on a regular basis to Sama and workers would read each text passage one by one and then assign a label to it.

Annie Minoff: OpenAI wanted a system where if you asked the AI to write something awful, like a description of a child being abused or a method for ending your own life, the system would refuse to write that. It would filter out those bad responses before they got to you. But to do that, the AI has to know what child abuse and suicide are. Humans have to teach it. And that was the Sama worker's job, to read descriptions of extreme violence, rape, suicide, and to categorize those texts for the AI. Here's Bill, the team leader."

https://www.wsj.com/podcasts/the-journal/the-hidden-workforce-that-helped-filter-violence-and-abuse-out-of-chatgpt/ffc2427f-bdd8-47b7-9a4b-27e7267cf413
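The pipeline described in the transcript, where workers attach category labels to text passages so a safety filter can learn what to refuse, can be sketched as a data shape plus a toy filter. Everything below is hypothetical: the labels, passages, and keyword tally are illustrations of the pipeline's shape, not OpenAI's method; a real safety filter is a statistical classifier trained on many thousands of such labeled records.

```python
# Hypothetical shape of data-annotation records and a toy filter built
# from them. A keyword tally stands in for a trained text classifier.

labeled_passages = [
    {"text": "a peaceful walk in the park", "label": "safe"},
    {"text": "graphic depiction of violence", "label": "violence"},
    {"text": "detailed account of violence and harm", "label": "violence"},
]

def build_keyword_index(records):
    """Map each word to the set of labels it appeared under."""
    index = {}
    for record in records:
        for word in record["text"].split():
            index.setdefault(word, set()).add(record["label"])
    return index

index = build_keyword_index(labeled_passages)

def flag(text):
    """Return the non-safe labels triggered by any word in the text."""
    hits = set()
    for word in text.split():
        hits |= index.get(word, set()) - {"safe"}
    return hits

print(flag("a story with violence"))  # → {'violence'}
print(flag("a peaceful walk"))        # → set()
```

A generator that refuses to emit text whenever `flag` returns a non-empty set would be the crude analogue of the safety filter the workers' labels were used to build.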
