Gh0stVX

@Gh0stVX@mastodon.gamedev.place

Alone, afraid, but still standing. A #GodotEngine user.


Gh0stVX, to trustandsafety

I want to find a solution to moderation tools (like blocklists) inadvertently advertising the content they oppose. I want to design a safety tool that's different in execution from those existing tools.
I'm introducing an idea called fediflags, where servers can moderate other servers blindly, based on info supplied by the moderated server itself. Has a server ever asked another to BE suspended, or to have their media rejected? Well, with this they can, to ALL applicable servers.
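Here's a minimal sketch (TypeScript) of how that could look, assuming a hypothetical "fediflags" field travels with each post. The field name, rule shape, and flag names are made up for illustration; this is not an existing ActivityPub extension:

```typescript
// Hypothetical sketch: a post carries self-declared flags, and the
// receiving server checks them against its own rules.
type ModerationAction = "none" | "rejectMedia" | "limit" | "suspend";

// A rule the receiving server configures for a flag it wants to act on.
interface FediflagRule {
  flag: string;             // e.g. "UncensoredNSFW"
  action: ModerationAction;
}

// Minimal slice of an incoming post with the proposed field attached.
interface IncomingPost {
  id: string;
  attributedTo: string;     // the origin actor
  fediflags?: string[];     // self-declared by the origin server
}

// Pick the strongest action any matching rule asks for.
function actionFor(post: IncomingPost, rules: FediflagRule[]): ModerationAction {
  const severity: ModerationAction[] = ["none", "rejectMedia", "limit", "suspend"];
  let result: ModerationAction = "none";
  for (const rule of rules) {
    if (post.fediflags?.includes(rule.flag) &&
        severity.indexOf(rule.action) > severity.indexOf(result)) {
      result = rule.action;
    }
  }
  return result;
}
```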

Gh0stVX,

For a use case, Japanese law states that NSFW art must be censored. Instead of having to wait for uncensored art to appear on the timeline, get reported, and have that single server moderated, a server could just set a rule to reject media from instances with the "UncensoredNSFW" fediflag, and every server carrying that flag would have the rule applied, since the flag travels with the rest of the post data. Moderation around NSFW content benefits the most.
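A standalone sketch of that single rule, again assuming the hypothetical "fediflags" array arrives with the post; the URLs and names are invented:

```typescript
// The receiving server's one rule for this case: reject media, keep the text.
const incoming = {
  actor: "https://art.example/users/alice",
  fediflags: ["UncensoredNSFW"],
  attachments: ["https://art.example/media/1.png"],
};

const rejectMediaFlags = new Set(["UncensoredNSFW"]);

const keptAttachments = incoming.fediflags.some(f => rejectMediaFlags.has(f))
  ? []
  : incoming.attachments;

console.log(keptAttachments);   // [] — the media is dropped before display
```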

Gh0stVX,

This could also be useful for servers trying to avoid certain topics, by limiting servers with e.g. the "UKWeather" fediflag. Anything that gets a CW would have a fediflag, along with other flags for fandoms, groups, etc. They could also be useful for finding servers.
They should play nicely with deny-lists and allow-lists. We could allow advanced rules, where moderation actions are taken on servers with certain combinations of flags but not others, or on servers missing certain flags.
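A sketch of what those advanced combination rules might look like; the rule shape and all the flag names are assumptions:

```typescript
// Act only when certain flags are all present and certain others are absent.
interface CombinationRule {
  allOf: string[];      // every flag here must be present
  noneOf: string[];     // none of these may be present
  action: string;       // e.g. "limit"
}

function ruleMatches(flags: string[], rule: CombinationRule): boolean {
  const present = new Set(flags);
  return rule.allOf.every(f => present.has(f)) &&
         rule.noneOf.every(f => !present.has(f));
}

// Example: limit servers carrying both topic flags unless they also
// declare a flag we treat as mitigating (all names made up).
const rule: CombinationRule = {
  allOf: ["UKWeather", "StormChasing"],
  noneOf: ["ActivelyModerated"],
  action: "limit",
};

console.log(ruleMatches(["UKWeather", "StormChasing"], rule)); // true
console.log(ruleMatches(["UKWeather"], rule));                 // false
```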

Gh0stVX,

This would also reduce pressure on blocklists, as their maintainers could start off by publishing the flags they act on, either by name or by describing the focus of their flag moderation rules, which means the individual servers aren't being advertised. We could also fediblock servers for not using fediflags when they should be. And users should be able to apply their own fediflag moderation rules.
Fediflags act as the first line of defense. They won't catch everything, but they'll help.
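One way user-level rules could layer on top of the server's rules, as a sketch: whichever decision is stricter wins for that user's view. The decision values and the ordering are made up:

```typescript
// Per-user rules combined with server-wide rules: the stricter outcome wins.
type Decision = "show" | "hideMedia" | "hide";

const severity: Decision[] = ["show", "hideMedia", "hide"];

function stricter(a: Decision, b: Decision): Decision {
  return severity.indexOf(a) >= severity.indexOf(b) ? a : b;
}

// The server has no rule for this flag, but the user filters it themselves.
const serverDecision: Decision = "show";
const userDecision: Decision = "hideMedia";

console.log(stricter(serverDecision, userDecision)); // "hideMedia"
```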

Gh0stVX,

I should note this isn't an original idea (unbeknownst to me at the start of the thread). It turns out @hrefna had the same idea, so credit to xem (and probably others). But I think this idea could be implemented much faster than other safety proposals. We already get data about the server a post is from, like the server name, and we already perform moderation checks on it. It feels like it could be hooked into ActivityPub fairly easily. I'm not a software engineer, though.
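For a sense of where this could hook in, here's a rough sketch of inbox handling with a fediflag policy added alongside the existing per-domain checks. None of these names come from any real server's code; they're assumptions for illustration:

```typescript
// Rough sketch: a policy check that sits after signature/domain checks
// and before the post is stored.
interface InboxActivity {
  actorDomain: string;
  fediflags?: string[];
}

type InboxPolicy = (activity: InboxActivity) => "accept" | "rejectMedia" | "drop";

function handleInbox(activity: InboxActivity, policies: InboxPolicy[]): string {
  // Servers already branch on actorDomain here (block/limit lists);
  // a fediflag policy would just be one more check in the same place.
  for (const policy of policies) {
    const decision = policy(activity);
    if (decision !== "accept") {
      return decision;
    }
  }
  return "accept";
}

// Example policy: drop anything carrying a flag we suspend on (made-up flag).
const suspendFlags = new Set(["SpamFarm"]);
const fediflagPolicy: InboxPolicy = (a) =>
  a.fediflags?.some(f => suspendFlags.has(f)) ? "drop" : "accept";

console.log(handleInbox({ actorDomain: "spam.example", fediflags: ["SpamFarm"] },
                        [fediflagPolicy]));   // "drop"
```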
