remixtures, to Bulgaria Portuguese
@remixtures@tldr.nettime.org avatar

: "Days after the Israel-Hamas war erupted last weekend, social media platforms like Meta, TikTok and X (formerly Twitter) received a stark warning from a top European regulator to stay vigilant about disinformation and violent posts related to the conflict.

The messages, from European Commissioner for the internal market Thierry Breton, included a warning about how failure to comply with the region’s rules about illegal online posts under the Digital Services Act could impact their businesses.

“I remind you that following the opening of a potential investigation and a finding of non-compliance, penalties can be imposed,” Breton wrote to X owner Elon Musk, for example.

The warning goes beyond the kind that would likely be possible in the U.S., where the First Amendment protects many kinds of abhorrent speech and bars the government from stifling it. In fact, the U.S. government’s efforts to get platforms to moderate misinformation about elections and Covid-19 are the subject of a current legal battle brought by Republican state attorneys general."

https://www.cnbc.com/2023/10/13/why-x-and-meta-face-pressure-from-eu-on-israel-hamas-war-disinformation.html

consideration, (edited) to trustandsafety

We have a new job opening at @CenDemTech
This is for a Research Fellow to help lead a new project that examines how content moderation systems, including the application of artificial intelligence tools, operate in non-English contexts, particularly in “low resource” and indigenous languages of the Majority World (Global South).

https://cdt.org/job/research-fellow/

itnewsbot, to instagramreality
@itnewsbot@schleuss.online avatar

As Red States Curb Social Media, Did Montana’s TikTok Ban Go Too Far? - Montana is at the forefront of a wave of new tech laws passed by Republican-led states. S... - https://www.nytimes.com/2023/10/12/technology/red-states-montana-tiktok-ban.html (bytedance)

communitysignal, to twitter
@communitysignal@mastodon.social avatar

From November: “There’s no way possible with the cuts [Musk has] made that he’s going to be able to do any type of content moderation. … [He] isn’t going to have anybody who remotely begins to know how to do that [legal compliance and related work].” –@RALSpencer

▶️https://www.communitysignal.com/elon-musks-quest-to-make-twitter-worse/

willoremus, to random
@willoremus@mastodon.social avatar

On YouTube, TikTok, Facebook & Instagram, any support for Hamas is banned.

On Telegram, Hamas openly posts grisly videos.

On X, there are plenty of rules but it's not clear anyone's enforcing them.

Wrote about the war and content moderation: https://www.washingtonpost.com/technology/2023/10/11/tiktok-youtube-israel-hamas-content-moderation/ w/ @NaomiNix

researchbuzz,
@researchbuzz@researchbuzz.masto.host avatar

@willoremus @NaomiNix

Thank you.

'In deciding what posts to take down during a war, social media companies have to weigh their interest in shielding users from violent, hateful and misleading content against the goals of allowing free expression, including newsworthy material and potential evidence of war crimes, said Evelyn Douek, an assistant professor at Stanford Law School.'

https://wapo.st/3PUZthV

remixtures, to instagramreality Portuguese
@remixtures@tldr.nettime.org avatar

: "I ceased posting on Twitter six days ago, and the feeling is liberating. While I periodically check my DMs, only a handful of people have noticed my departure, so I may continue to monitor messages regularly. A cursory check confirmed that my decision to depart was justified, revealing a steadily emptying timeline. I’ve opted not to delete my account, clinging to the slim hope that Musk may grow bored and sell the platform, though I recognise this is just a fool’s hope. I’ve also chosen to keep posting blog updates automatically, with replies disabled or reduced, so that I’m not tempted back.

What’s been notably interesting is that I anticipated missing Twitter more than I actually do. Honestly, it feels refreshing, like a burdensome weight has been lifted. I was unaware of how the app’s accumulating negative energy was impacting me, and I am thankful for making this decision. “Never say never” is a prudent course of action, perhaps a future lapse will draw me back in. For now, however, I am content with my choice.

So long and thanks for all the memes."

https://www.technollama.co.uk/its-time-to-leave-twitter

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "“These 'general purpose’ models cannot be made safe because there is no single consistent notion of safety across all application contexts,” said Biderman. “What is safe for primary school education applications doesn't always line up with what is safe in other contexts.”

Even so, the results demonstrate that these tools—which, like all AI systems, are deeply embedded with human bias—seem to lack even the most obvious defenses against misuse, let alone protections for peoples’ creative work. And they also speak volumes about the apparent reckless abandon with which companies have plunged into the AI craze.

“Before releasing any AI software, please hand it to a focus group of terminally online internet trolls for 24 hours,” wrote Micah, a user on Twitter competitor Bluesky. “If you aren’t OK with what they generate during this time period, do not release it.”"

https://www.vice.com/en/article/88xdez/generative-ai-is-a-disaster-and-companies-dont-seem-to-really-care

remixtures, to Canada Portuguese
@remixtures@tldr.nettime.org avatar

: "The Canadian government plans to regulate the use of artificial intelligence in search results and when used to prioritize the display of content on search engines and social media services. AI is widely used by both search and social media for a range of purpose that do not involve ChatGPT-style generative AI. For example, Google has identified multiple ways that it uses AI to generate search results, provide translation, and other features, while TikTok uses AI to identify the interests of its users through recommendation engines. The regulation plans are revealed in a letter from ISED Minister François-Philippe Champagne to the Industry committee studying Bill C-27, the privacy reform and AI regulation bill. The government is refusing to disclose the actual text of planned amendments to the bill.

The current approach in Bill C-27 leaves the question of which AI systems should be viewed as high impact to a future regulatory approach. The letter says the government now plans to identify the high impact systems within the bill and drop the future regulatory process. While many of the proposed high impact systems are unsurprising and largely mirror similar rules in the European Union, the inclusion of search and social media is a key exception. The government is targeting the following classes of AI systems:"

https://www.michaelgeist.ca/2023/10/canada-plans-to-regulate-search-and-social-media-use-of-artificial-intelligence-for-content-moderation-and-discoverability/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "4chan users are coordinating a posting campaign where they use Bing’s AI text-to-image generator to create racist images that they can then post across the internet. The news shows how users are able to manipulate free to access, easy to use AI tools to quickly flood the internet with racist garbage, even when those tools are allegedly strictly moderated.

“We’re making propaganda for fun. Join us, it’s comfy,” the 4chan thread instructs. “MAKE, EDIT, SHARE.”

A visual guide hosted on Imgur that’s linked in that post instructs users to use AI image generators, edit them to add captions that make them seem like political campaigns, and post them to social media sites, specifically Telegram, Twitter, and Instagram. 404 Media has also seen these images shared on a TikTok account that has since been removed."

https://www.404media.co/4chan-uses-bing-to-flood-the-internet-with-racist-images/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

"Microsoft’s Bing Image Creator, produced by one of the most brand-conscious companies in the world, is heavily filtered: images of real humans aren’t allowed, along with a long list of scenarios and themes like violence, terrorism, and hate speech. It launched in March, and since then, users have been putting it through its paces. That people have found a way to easily produce images of Kirby, Mickey Mouse or Spongebob Squarepants doing 9/11 with Microsoft’s heavily restricted tools shows that even the most well-resourced companies in the world are still struggling to navigate issues of moderation and copyrighted material around generative AI.

I came across @tolstoybb’s Bing creation of Eva pilots from Neon Genesis Evangelion in the cockpit of a plane giving a thumbs-up and headed for the twin towers, and found more people in the replies doing the same with LEGO minifigs, pirate ships, and soviet naval hero Stanislav Petrov. And it got me thinking: Who else could Bing put in the pilot’s seat on that day?"

https://www.404media.co/bing-is-generating-images-of-spongebob-doing-9-11/

eff, to Bulgaria
@eff@mastodon.social avatar

The EU Media Freedom Act could lead to marginalized groups who are often targeted with hate speech facing arbitrary content moderation and discrimination—and could have worldwide effects, EFF’s Christoph Schmon & Paige Collings write for @TechCrunch https://techcrunch.com/2023/10/03/the-eu-media-freedom-act-is-a-dangerous-law/

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

#SocialMedia #ContentModeration #PlatformGovernance #OpenAccess: "We are pleased to announce the publication of our new special issue in Social Media + Society on “Trust and Safety on Social Media: Understanding the Impact of Anti-Social Behavior and Misinformation on Content Moderation and Platform Governance,” edited by Anatoliy Gruzd, Felipe Bonow Soares, and Philip Mai.

The issue features eleven open access peer-reviewed articles that examine “the rise of anti-social behavior, misinformation, and other forms of problematic content within and across various social media settings, contexts, and user groups. The special issue featured work that emerged from the presentations and deliberations by interdisciplinary scholars at the 2022 International Conference on Social Media & Society, organized by the Social Media Lab at Toronto Metropolitan University. The issue explores two dangerous and interconnecting trends: the rise of anti-social behavior and the spread of misinformation online. Its aim is to help the public, policymakers, and platform operators better understand the factors contributing to the rise of these minacious trends on social media.”

The issue was truly a team effort. We would like to extend a heartfelt thank you to all contributing authors, reviewers and guest editors for contributing to the publication of this timely and in-depth issue and to SM+S publisher and journal editor Zizi Papacharissi for supporting the special issue from its inception to publication.

All articles are free to read and accessible at the links below."

https://socialmedialab.ca/2023/10/03/announcing-a-new-special-issue-of-social-mediasociety-on-trust-and-safety-on-social-media-understanding-the-impact-of-anti-social-behavior-and-misinformation-on-content-moderation-and-platf/

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

: "A small community of people who search for adult content on YouTube has discovered a bug that allows them to continue hosting porn on YouTube, even if their channels are deleted.

The person who claims to have originally discovered the exploit and explained how it works in a YouTube video has since had that video removed and says that YouTube has fixed the bug, but as I am typing this my second monitor is playing a very explicit hentai that’s been up on YouTube since at least September 21, posted by a YouTube channel that was “terminated” at least five days earlier. YouTube only removed this video after I sent it the link directly."

https://www.404media.co/people-exploited-youtube-bug-to-upload-porn-that-cannot-be-deleted/

itnewsbot, to instagramreality
@itnewsbot@schleuss.online avatar

SCOTUS to decide if Florida and Texas social media laws violate 1st Amendment - https://arstechnica.com/?p=1972299 #x

senficon, to random
@senficon@ohai.social avatar

The source code for the European Commission’s new transparency database is on GitHub: https://github.com/digital-services-act/transparency-database

senficon, to instagramreality
@senficon@ohai.social avatar

The European Commission has launched its transparency database of platforms’ decisions (“statements of reasons” in DSA lingo). It already contains thousands of records from platforms that will hopefully give some quantitative insights into their content moderation approaches, the degree of automation, etc. Unfortunately, the qualitative information on individual decisions is very limited. https://transparency.dsa.ec.europa.eu/statement

senficon,
@senficon@ohai.social avatar

I hope the restrictions on CSV exports of the transparency information will be lifted by the Commission. So far, we’ve had voluntary sharing of info by some platforms through @LumenDatabase. This mandatory database could offer so many more insights, but the data needs to be fully downloadable. I advocated for this transparency provision in the DSA, and I hope journalists find it useful! @josephcox @jasonkoebler @samleecole
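
Once bulk exports are fully downloadable, even a very small script could surface the kind of quantitative insight mentioned above (volume per platform, share of automated decisions). Below is a minimal sketch, assuming a local CSV dump with hypothetical column names platform_name and automated_decision; these names are illustrative assumptions, not the database's confirmed schema.

```python
# Minimal sketch: tally statements of reasons per platform from a local CSV
# dump of the transparency database. The file name and the column names
# ("platform_name", "automated_decision") are assumptions for illustration;
# check the actual export schema before relying on this.
import csv
from collections import Counter

def summarize(path: str) -> None:
    totals = Counter()
    automated = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            platform = row.get("platform_name", "unknown")
            totals[platform] += 1
            if str(row.get("automated_decision", "")).lower() in ("yes", "true", "1"):
                automated[platform] += 1
    for platform, n in totals.most_common():
        share = automated[platform] / n if n else 0.0
        print(f"{platform}: {n} statements, {share:.1%} automated")

if __name__ == "__main__":
    summarize("statements_of_reasons.csv")  # hypothetical export file name
```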

MrBerard, to Facebook
@MrBerard@pilote.me avatar

Do you remember that one time when someone uploaded a deepfake of Mark Zuckerberg to Instagram to prove a point about content moderation, but people immediately saw it wasn't real because the facial expressions looked too human for Zuck?

hrefna, to mastodon
@hrefna@hachyderm.io avatar

We need finer-grained controls than what are provided today by Mastodon for content moderation.

For instance, what if the tooling allowed individuals or instances to say:

  • "Only allow direct messages from this instance from people who are followed or where there is explicit consent to receive a direct message"

  • "Automatically hide all content (under something like what's called a CW today) from this server unless there's explicit permission to show it from the user"

  • "Block replies to this post"

paninid,
@paninid@mastodon.world avatar

@hrefna @mekkaokereke
It’s a fascinating democratic technosocial governance exercise to witness the policy discussions happening on the fediverse that should have occurred on social media networks.

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

: "As online platforms grow, they find themselves increasingly trying to balance two competing priorities: individual rights and public health. This has coincided with the professionalization of platforms’ trust and safety operations—what we call the “customer service” model of online governance. As professional trust and safety teams attempt to balance individual rights and public health, platforms face a crisis of legitimacy, with decisions in the name of individual rights or public health scrutinized and criticized as corrupt, arbitrary, and irresponsible by stakeholders of all stripes. We review early accounts of online governance to consider whether the customer service model has obscured a promising earlier model where members of the affected community were significant, if not always primary, decision-makers. This community governance approach has deep roots in the academic computing community and has re-emerged in spaces like Reddit and special purpose social networks and in novel platform initiatives such as the Oversight Board and Community Notes. We argue that community governance could address persistent challenges of online governance, particularly online platforms’ crisis of legitimacy. In addition, we think community governance may offer valuable training in democratic participation for users."
https://journals.sagepub.com/doi/10.1177/20563051231196864

itnewsbot, to california
@itnewsbot@schleuss.online avatar

X sues Calif. to avoid revealing how it makes “controversial” content decisions - https://arstechnica.com/?p=1966853 #x

remixtures, to internet Portuguese
@remixtures@tldr.nettime.org avatar

: "In order to challenge and defeat this hierarchy of hate, we argue, it is important to pursue many different technological, legal, social and political means, including by forcing platforms to govern and work better, to recognise and take down such hate far more swiftly, and to support those at the receiving end. However, we are very clear: simple technocentric explanations and suggested solutions that do not include complex attempts to recognise and work against social injustices and discrimination and challenge the normalisation of violence and supremacy in the media and political speech, will end in failure and further bolster those who circulate hate on and off-line."
https://blogs.lse.ac.uk/medialse/2023/08/02/a-hierarchy-of-hate/

remixtures, to Bulgaria Portuguese
@remixtures@tldr.nettime.org avatar

: "It’s one thing to come up with an ambitious rulebook. It’s another to successfully enforce it.

The content-moderation law has serious potential to bite. The law provides for stronger fines than its GDPR sister rulebook — 6 percent of companies’ annual revenue, compared with 4 percent. Led by the team that wrote the law — and knows it inside and out — the Commission will have broad enforcement powers, similar to antitrust investigators’, to oversee and ensure the compliance of the biggest tech firms. It will also receive extra yearly funding — an estimated €45 million for 2024 — funded through an annual levy from the Big Tech firms themselves.

The teams in Brussels will be backed by dozens of artificial intelligence and computer scientists at the Commission's European Centre for Algorithmic Transparency (ECAT). And the Commission will also cooperate with national EU digital regulators, including in Ireland, where most of the affected tech firms have their EU headquarters."

https://www.politico.eu/article/digital-services-act-dsa-online-content-law-europe-teeth-bite/
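
For scale, the fine ceilings quoted above can be made concrete with a line of arithmetic; the revenue figure below is purely hypothetical.

```python
# Trivial illustration of the maximum fine ceilings cited above:
# DSA up to 6% of annual revenue vs. GDPR up to 4%.
annual_revenue_eur = 100_000_000_000  # hypothetical: 100 billion euros
print(f"DSA ceiling:  {0.06 * annual_revenue_eur:,.0f} EUR")   # 6,000,000,000 EUR
print(f"GDPR ceiling: {0.04 * annual_revenue_eur:,.0f} EUR")   # 4,000,000,000 EUR
```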
