remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

: "What rarely gets mentioned in these discussions, however, is the fact that the Chinese government has built the most comprehensive digital surveillance system in the world, which it primarily uses not to protect children, but to squash any form of dissent that may threaten the power of the Chinese Communist Party. “Everybody exists in a censored environment, and so what gets censored for kids is just one step on top of what gets censored for adults,” Jeremy Daum, a senior research scholar at Yale Law School’s Paul Tsai China Center and the founder of the site China Law Translate, told me.

It should set off warning bells for Americans that many states have explored legislation limiting internet access for minors in ways that mirror what China has done."

https://www.theatlantic.com/technology/archive/2024/05/tiktok-chinese-version/678325/

paninid, to TikTok
@paninid@mastodon.world avatar

This ran afoul of #TikTok community guidelines.

It is still up on #Facebook / #Instagram, though? 🤨🤷🏻‍♂️

Interesting what one platform considers okay for its #ContentModeration rules and another doesn’t, relative to Mastodon.

The Chinese Communist Party will let you throw shade at failed Austrian artists, but Zuck protects them.

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

: "So you joined a social network without ranking algorithms—is everything good now? Jonathan Stray, a senior scientist at the UC Berkeley Center for Human-Compatible AI, has doubts. “There is now a bunch of research showing that chronological is not necessarily better,” he says, adding that simpler feeds can promote recency bias and enable spam.

Stray doesn’t think social harm is an inevitable outcome of complex algorithmic curation. But he agrees with Rogers that the tech industry’s practice of trying to maximize engagement doesn’t necessarily select for socially desirable results.
Stray suspects the solution to the problem of social media algorithms may in fact be … more algorithms. “The fundamental problem is you've got way too much information for anybody to consume, so you have to reduce it somehow,” he says."

https://www.wired.com/story/latest-online-culture-war-is-humans-vs-algorithms/

remixtures, to apple Portuguese
@remixtures@tldr.nettime.org avatar

: "Apple has removed a number of AI image generation apps from the App Store after 404 Media found these apps advertised the ability to create nonconsensual nude images, a sign that app store operators are starting to take more action against these types of apps.

Overall, Apple removed three apps from the App Store, but only after we provided the company with links to the specific apps and their related ads, indicating the company was not able to find the apps that violated its policy itself.

Apple’s action comes after we reported on Monday that Instagram advertises nonconsensual AI nude apps. By browsing Meta’s Ad Library, which archives ads on its platform, when they ran, on what platforms, and who paid for them, we were able to find ads for five different apps, each with dozens of ads. Two of the ads were for web-based services, and three were for apps on the Apple App Store. Meta deleted the ads when we flagged them. Apple did not initially respond to a request for comment on that story, but reached out to me after it was published asking for more information. On Tuesday, Apple told us it removed the three apps on its App Store." https://www.404media.co/apple-removes-nonconsensual-ai-nude-apps-following-404-media-investigation/

remixtures, to news Portuguese
@remixtures@tldr.nettime.org avatar

: "Our investigation found that fact-checks enjoy greater visibility in Google Web Search compared to the articles they seek to correct, both in terms of frequency of appearance and their placement within the SERP rankings. Specifically, our study shows fact-checks rank higher than problematic content across five topical keyword groups, Covid-19, climate change, the war in Ukraine, U.S. liberals and U.S. elections, except in contested stories related to the war in Ukraine, where articles about U.S. bio-labs share equal prominence with their corresponding fact-checks. The findings imply Google moderation effects, as fact-checking content is more prominent given (nearly) equal levels of optimisation. It also implies that fact-checks are generally more prominent for audiences searching for problematic content, though both often appear in the same SERP. Navigational queries (e.g., searching for the name of a source and that content) reduce moderation effects." https://dl.acm.org/doi/abs/10.1145/3614419.3644017

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

: "For all their efforts to moderate content and reduce online toxicity, social media companies still fundamentally care about one thing: retaining users in the long run, a goal they’ve perceived as best achieved by keeping them engaged with content as long as possible. But the goal of keeping individuals engaged doesn’t necessarily serve society at large and can even be harmful to values we hold dear, such as living in a healthy democracy.

To address that problem, a team of Stanford researchers advised by Michael Bernstein, associate professor of computer science in the School of Engineering, and Jeffrey Hancock, professor of communication in the School of Humanities and Sciences, wondered if designers of social media platforms might, in a more principled way, build societal values into their feed-ranking algorithms. Could these algorithms, for example, promote social values such as political participation, mental health, or social connection? The team tested the idea empirically in a new paper that will be published in Proceedings of the ACM on Human-Computer Interaction in April 2024. Bernstein, Hancock, and a group of Stanford HAI faculty also explored that idea in a recent think piece.

For their experiment, the researchers aimed to decrease partisan animosity by building democratic values into a feed-ranking algorithm. “If we can make a dent in this very important value, maybe we can learn how to use social media rankings to affect other values we care about,” says Michelle Lam, a fourth-year graduate student in computer science at Stanford University and co-lead author of the study." https://hai.stanford.edu/news/building-social-media-algorithm-actually-promotes-societal-values

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

#SocialMedia #Facebook #Meta #Censorship #BigTech #ContentModeration: "Meta blocked a newspaper’s critical report about it on Facebook and its other social sites for hours, sparking a backlash that intensified after the company appeared to subsequently block links to the website of an independent journalist who republished the report.

The controversy began Thursday morning when users noticed that all links to the non-profit newspaper the Kansas Reflector had been flagged as a cybersecurity threat and their posts were removed. About seven hours later, the paper said, most of its links had been restored, save for one — a column that had criticized Facebook and accused it of suppressing posts related to climate change.

Meta apologized to the Reflector and its readers on Thursday for what the company’s communications chief, Andy Stone, called “an error that had nothing to do with the Reflector’s recent criticism of Meta.”

But on Friday, users who attempted to share the column on Facebook, Instagram or Threads were shown a warning that it violated community guidelines. That seemed suspicious to Marisa Kabas, an independent journalist in New York, who asked the Reflector for permission to publish the text of the column on her own website, the Handbasket." https://edition.cnn.com/2024/04/05/tech/meta-nonprofit-newspaper-independent-journalist-alleged-censorship/index.html

paninid, to random
@paninid@mastodon.world avatar

You don’t get social media without anti-social people.

That is why #ContentModeration is a thing.

atomicpoet, to random
@atomicpoet@atomicpoet.org avatar

Oh man, with @potus joining the Fediverse, there’s been an outpouring of salt. 😆

paninid,
@paninid@mastodon.world avatar

@samxavia @dogzilla @6G @atomicpoet

I think it’s easier to declare defederation from Threads as a policy if you’re less exposed or aware of the toxic sewer system that is #ContentModeration.

The stuff tolerated by Meta wouldn’t last very long on any given Mastodon instance.

And yet.

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

#SocialMedia #ContentModeration #Reddit: "How do you put out a dumpster fire? Don’t call in professional firefighters; let volunteers show up with buckets and get to work.

This is, in essence, how Reddit, the 19-year-old social media site, turned itself from one of the darkest corners of the web into something of a model for how content moderation should be done. The San Francisco company rode that transformation, which relied on an army of volunteer monitors, all the way to Wall Street this month, when it pulled off one of the most anticipated tech offerings in years.

Its shares rocketed to $74 in its first days on the Nasdaq exchange after debuting at $34. The stock gave up some ground to just under $50 at Thursday’s pre-Easter close, valuing the company at $8 billion (£6.5 billion).

Reddit’s arrival on the public market marked a remarkable coming of age for a company that once revelled in its status as a free-for-all chock full of internet trolls and “not safe for work” posts. The latter included the infamous episode a decade ago when unauthorised nude photos were shared of celebrities including Rihanna and Jennifer Lawrence. The response of Reddit’s then chief executive was to leave up the offending material. In a post entitled “Every Man Is Responsible For His Own Soul”, he explained that the firm was not so much a website as a “government of a new type of community” — a community that happened to have a rabid belief in free expression." https://www.thetimes.co.uk/article/7d352b65-464e-4dc7-906d-62c014b88679?shareToken=00aeb577b8820278fd9a02e8761f70ca

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

#SocialMedia #ContentModeration #Censorship #FreedomOfSpeech #Authoritarianism: "I worry that liberals and some on the left routinely downplay the threat to speech that these platforms and the prospect of government control over them present. This is in part because there are few staunch defenders of free speech among their ranks these days. It’s not hard to see why this is. The bad faith invocation of free speech has been used by some heinous characters to defend online harassment, doxxing, and surveillance-based micro-targeting. Moreover, the past two decades have witnessed the gauche instrumentation of the First Amendment to argue for corporations’ rights to do whatever the fuck they want, including the tech industry’s brandishing the constitution to defend their metastatic business model.

It’s true, there is a lot of disingenuous nonsense when it comes to free speech discourse. But this doesn’t mean we should confuse these essential rights with the actors who speciously invoke them — something we often see in the liberal tendency to deny that centralized platform control of speech is a significant problem. The real problem, much liberal policy implies, is too little control of speech — too little monitoring, surveillance, and age-gating; too little trust, and too little safety; too many criminals hiding in shadows with not enough national security oversight; and too little U.S. ownership and “control.” The all-too-commonly proffered solution to the harms that flow from platform surveillance practices and business models is to ensure that they are wisely governed by upstanding people applying appropriate norms and standards. The fight, in other words, is aimed at expanding power over these platforms to governments and sometimes NGOs. With the counterfactual vision of an ordered and just state standing in for any critical thinking about who will actually exercise such power, and how." https://lpeproject.org/blog/social-media-authoritarianism-and-the-world-as-it-is/

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

#SocialMedia #Discord #ContentModeration #Censorship: "In an unusual weaponization of content moderation tools, members of hacking and fraud focused Discord servers are deliberately uploading child abuse imagery to have their rivals’ servers shut down, 404 Media has found.

Vendors in the digital underground have sold banning services for sites like Instagram for years. What makes this Discord attack different is that it is much more clearly criminal in nature—both for the person uploading child abuse imagery to a target server, and potentially for someone who downloads it, accidentally or otherwise. Broadly, the Discord attack is a continuation of using content moderation systems that are designed to protect users, but which can be leveraged maliciously to target others.

“My Discords been getting banned a lot recently,” one Discord administrator told 404 Media in an online chat. “Someone posted CP [child porn] in it and reported the server so it gets banned.”" https://www.404media.co/criminals-are-weaponizing-child-abuse-imagery-to-ban-discord-servers/

remixtures, to Bulgaria Portuguese
@remixtures@tldr.nettime.org avatar

#EU #DMA #DSA #BigTech #SocialMedia #ContentModeration #Monopolies: "As the EU’s new flagship tech laws, the Digital Services Act and the Digital Markets Act, are coming into full application, Big Tech is working hard to shoot them down. As of today, the Digital Markets Act (DMA) becomes fully applicable, following its counterpart the Digital Services Act (DSA) on 17 February.

However, as the EU’s new tech laws are coming into full application, tech corporations like Apple, Amazon, Meta and TikTok are already undermining them at every turn. To subvert these new regulations, tech corporations have filed a number of lawsuits against the European Commission and attempted to weaken the rules with malicious compliance that protects their profits at the expense of their users.

Malicious compliance pretends to follow the letter of the law in such a way that ignores or otherwise sabotages the law’s intent. That’s why civil society organisations like EDRi are holding tech corporations to account for their actions and support the European Commission in fully utilising its oversight authority.

The DSA regulates how social media platforms deal with potentially illegal online content uploaded by their users, without unduly limiting people’s freedom of expression. The DMA contains powerful obligations and prohibitions to prevent those tech firms from monopolising key markets like smartphones, chat apps, app stores, and more." https://edri.org/our-work/delay-depress-destroy-how-tech-corporations-subvert-the-eus-new-digital-laws/

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

: "We conclude that, while there is a clear need for protecting children online, there is currently no age assurance method that adequately protects individuals’ fundamental rights. The risks associated with the implementation of age assurance include privacy intrusion, data leaks, behavioural surveillance, identity theft, and impeded autonomy. Moreover, while none of the methods reviewed could attest a user’s age with certainty, the implementation of such measures may exacerbate existing discrimination against already disadvantaged groups of society, likely widen the digital divide and lead to further exclusion.

Promising privacy-preserving techniques, e.g., digital identities and double-blind transmission methods, are under development. These may offer improved user privacy protection by enabling anonymous age assurance. However, important security and inclusivity risks remain. Moreover, these technologies face implementation challenges, given the current absence of a pan-European technical and legal framework to support their wide adoption." https://www.greens-efa.eu/en/article/document/trustworthy-age-assurance

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

#SocialMedia #Facebook #Meta #Censorship #ContentModeration #Israel #Gaza #Palestine: "CITING THE COMPANY’S “failure to provide answers to important questions,” Sens. Elizabeth Warren, D-Mass., and Bernie Sanders, I-Vt., are pressing Meta, which owns Facebook and Instagram, to respond to reports of disproportionate censorship around the Israeli war on Gaza.

“Meta insists that there’s been no discrimination against Palestinian-related content on their platforms, but at the same time, is refusing to provide us with any evidence or data to support that claim,” Warren told The Intercept. “If its ad-hoc changes and removal of millions of posts didn’t discriminate against Palestinian-related content, then what’s Meta hiding?”

In a letter to Meta CEO Mark Zuckerberg sent last December, first reported by The Intercept, Warren presented the company with dozens of specific questions about the company’s Gaza-related content moderation efforts. Warren asked about the exact numbers of posts about the war, broken down by Hebrew or Arabic, that have been deleted or otherwise suppressed.

The letter was written following widespread reporting in The Intercept and other outlets that detailed how posts on Meta platforms that are sympathetic to Palestinians, or merely depicting the destruction in Gaza, are routinely removed or hidden without explanation.

A month later, Meta replied to Warren’s office with a six-page letter, obtained by The Intercept, that provided an overview of its moderation response to the war but little in the way of specifics or new information." https://theintercept.com/2024/03/26/meta-gaza-censorship-warren-sanders/?utm_source=twitter&utm_campaign=theintercept&utm_medium=social

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

: "Today, we live with the irony that the intense pitch and total saturation of political conversation in every part of our lives—simply pick up your phone and rejoin the fray—create the illusion that important ideas are right on the verge of being actualized or rejected. But the form of that political discourse—millions of little arguments—is actually what makes it impossible to process and follow what should be an evolving and responsive conversation. We mistake volume for weight; how could there be so many posts about something with no acknowledgment from the people in charge? Don’t they see how many of us are expressing our anger? These questions elicit despair, because the poster believes that no amount of dissent will actually be heard. And when that happens, in any forum, the posters blame the mods.

The mods do have supporters: “normie” liberals and conservatives who still put a degree of faith in the expert and media classes and who want, more than anything, to restore some bright line of truth so that society can continue to function. A central question of our current moment is whether that faith is enough to unite a critical mass of voters, or whether the medium we have chosen for everything, from photos of our children to our most private conversations, will simply not allow for any consensus, especially one that appeals to a population as broadly complacent as the American consumer class. Normies, who are mostly unified in their defense of the status quo, still wield a reasonable amount of political power, and they will continue to exist in some form. But, as even more of our lives take place within the distortions of online life, how much longer will there be a widely agreed-upon status quo to defend?" https://www.newyorker.com/news/fault-lines/arguing-ourselves-to-death

casilli, to Facebook French
@casilli@mamot.fr avatar

On 21–22 Feb., Kauna Ibrahim Malgwi, a former Facebook moderator and board member of the first African union of content moderators, was welcomed at the European Parliament. Her deeply humane testimony and her commitment to organizing workers profoundly moved the audience. https://www.humanite.fr/social-et-economie/facebook/kauna-moderatrice-pour-facebook-au-kenya-jai-vu-beaucoup-de-suicides-en-video interview by @pierricm

patrickokeefe, (edited ) to random
@patrickokeefe@mastodon.social avatar

Just remember, when you say things like "community management/content moderation is new" to make it seem like you're on the cutting edge of some previously unexplored valley, you sound like you're a Justice on the Supreme Court. We're around 50 years deep now.

Screenshot via @LeahLitman.

#SupremeCourt #Section230 #Censorship #ContentModeration #CommunityManagement

kbindependent, to Texas
@kbindependent@newsie.social avatar

Supreme Court casts doubt on Florida law regulating social media:

Justices seemed wary of a broad ruling, with Justice Amy Coney Barrett warning of "land mines" she and her colleagues need to avoid in resolving the two cases.
#X

https://kbindependent.org/2024/02/27/supreme-court-casts-doubt-on-florida-law-regulating-social-media/

casilli, to Europe
@casilli@mamot.fr avatar

The DiPLab crew was at the European Parliament a few days ago to organize a panel titled "Meet the human workers behind AI". We listened to the testimonies of microworkers, cloud workers, and internet moderators. Their voices, their struggles, and the solidarity of other platform workers. https://diplab.eu/diplab-on-the-european-parliaments-transnational-forum-of-alternatives-to-uberisation/

internetsociety, to internet
@internetsociety@techpolicy.social avatar

Today at 10am ET, the US Supreme Court hears oral arguments in two cases that may decide if you can moderate content on your website!

You can listen to the live audio stream at:
https://www.supremecourt.gov/oral_arguments/live.aspx

Read our post to understand why this case is so important for the open Internet:

https://www.internetsociety.org/blog/2023/12/can-you-kick-the-trolls-out-of-your-online-forum-u-s-supreme-court-to-decide/

A link is in that post to the amicus brief we submitted for the cases.

remixtures, to Bulgaria Portuguese
@remixtures@tldr.nettime.org avatar

#EU #DSA #SocialMedia #TrustedFlaggers #ContentModeration: "One of the most-publicized innovations brought about by the Digital Services Act (DSA or Regulation) is the ‘institutionalization’ of a regime that emerged and consolidated over the past decade through voluntary programs introduced by the major online platforms: trusted flaggers. This blogpost provides an overview of the relevant provisions, procedures, and actors. It argues that, ultimately, the DSA’s much-hailed trusted flagger regime is unlikely to have groundbreaking effects on content moderation in Europe."

https://verfassungsblog.de/the-dsas-trusted-flaggers/

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

#SocialMedia #ContentModeration #Facebook #Meta #Israel #Gaza #Palestine #Genocide #Censorship: "Since October 2023, when Israeli forces began bombarding Gaza in response to the October 7 Hamas attack, Palestinian and pro-Palestinian voices have been censored and suppressed on Facebook and Instagram. Access Now’s new report illustrates how content removal, arbitrary account suspensions, and discriminatory enforcement of content moderation policies against Palestinian voices have been the norm through examples and documentation of:

  • Clear patterns of censorship, including arbitrary content removals, account suspensions, “shadow-banning,” and further arbitrary restrictions on pro-Palestinian people and content;
  • Flawed content moderation policies, with Meta stifling freedom of expression through its broad interpretation of the company’s Designated Organizations and Individuals (DOI) policy;
  • Biased rule enforcement, including Meta’s over-moderation of Arabic content compared to Hebrew content, and the company’s failure to adequately address hate speech, dehumanization, and genocidal rhetoric against Palestinians; and
  • Arbitrary and erroneous rule enforcement, with an unacceptable error rate in Meta’s automated decision-making, particularly in non-English languages."

https://www.accessnow.org/press-release/meta-systematic-censorship-palestinian-voices/

rolle,
@rolle@mementomori.social avatar

@admin Hmm, interesting. He joined on January 21st with a reason: ”I want to join the greate community and to chat with other members and to contribute to the community” and I suspended him February 6th because of spam. So he could not have caused further spam from my server, because I do not have open registration. But I may be a target because of this.
