moderation meta
Want an example of differing needs?
I would love to have a feature that, whenever a new instance makes contact (sends a post or reaction), "quarantines" that instance, that is, silences it immediately and flags it for manual review by the mods. That would, imo, massively cut down on spam and unmoderated bad actors and represent the best compromise between allowlist federation and unrestricted federation where every jojo can spin up a new instance every day.
Others have told me this is a terrible idea that will surely kill the entire fediverse and I'm a horrible, censorship-loving isolationist for suggesting it (you know who you are, not that you can read this post).
With a moderation API like the one I propose, both of us could implement our respective ideas of moderation without taking the fight to the developers.
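To make that concrete: with a hook API along these lines (every name here is made up by me; nothing like it exists in Mastodon today), my quarantine idea would be a few lines of plugin code:

```python
# Hypothetical sketch; "on_first_contact" and "admin_api" are invented names
# standing in for whatever the proposed moderation API would actually provide.

def on_first_contact(domain: str, admin_api) -> None:
    """Quarantine a newly seen instance: silence it and flag it for review."""
    # Silence (limit) the new domain immediately, so nothing from it
    # reaches local timelines until a mod has had a look...
    admin_api.limit_domain(domain, comment="auto-quarantine: first contact")
    # ...and put it in the mods' review queue.
    admin_api.flag_for_review(kind="new_instance", domain=domain)
```

And anyone who hates the idea simply wouldn't install that plugin.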
What Mastodon needs, in some shape or form, and sooner rather than later, is a comprehensive API that allows third-party extensions to interact with incoming and outgoing traffic.
It must be(come) possible for instances to implement features that automatically filter, report, or flag for review any received, sent, or locally created posts, without having to rewrite and recompile the entire Mastodon software themselves.
There will never be complete consensus on what's needed for moderation, and either implementing options for every idea, or implementing only the most "popular" suggestions (hard to determine on a decentralised network), will leave many, many users and admins unsatisfied.
Opening up an interface for admins to plug their own moderation tools into is the only feasible way to improve moderation in the way each instance wants, while separating the development of moderation tools from the development of Mastodon itself.
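For illustration, an interface as small as this (all names invented by me for the sketch; this is not an existing Mastodon API) would already cover most of the ideas floating around:

```python
# Hypothetical plugin interface for the proposed moderation API.
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ACCEPT = "accept"  # deliver normally
    FLAG = "flag"      # deliver, but queue for manual review
    HOLD = "hold"      # withhold until a mod approves
    REJECT = "reject"  # drop the activity entirely

@dataclass
class Post:
    author: str   # full handle, e.g. "user@example.social"
    domain: str   # originating instance
    content: str  # rendered text content

class ModerationPlugin:
    """Base class a third-party moderation tool would subclass."""

    def on_inbound(self, post: Post) -> Action:
        return Action.ACCEPT

    def on_outbound(self, post: Post) -> Action:
        return Action.ACCEPT
```

Each instance installs the plugins that match its own moderation philosophy, and the Mastodon developers stay out of the fight.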
Developing a perfect moderation suite that satisfies as many admins as possible is a workload Mastodon gGmbH can't handle. But leaving the tools as woefully inadequate as they are right now means reputational damage that the fediverse (as widely synonymous with Mastodon as it sadly is in the eyes of the public) can't handle.
Dear Eugen, do something.
Seriously, Eugen (I would tag him, but .social blocked me for criticising him one time too many): you need to focus less on "growth" and on making Mastodon look like Twitter, stop wasting effort on micromanaging design (cough hashtag bar, cough content note hazard bars), and refocus on moderation.
Or to put it in words Mastodon gGmbH understands: "growth" is impossible to achieve if "your" network gets a reputation for being a spam-ridden hellhole.
Take a look at the development of email spam filters and draw some lessons from it.
The current "Nicole the fediverse chick" spam wave once again highlights how woefully inadequate Mastodon's moderation tools are.
If there were a way to set up some filters that automatically flag all posts or accounts matching a pattern for manual review, maybe with the option to apply some preliminary action (hide posts or accounts) until review, we could just catch all accounts with usernames "@ nicole (digit digit)" automatically and be done with it.
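For illustration, the filter for this particular wave really would be this small (plain Python; the pattern is based on the usernames seen going around):

```python
import re

# Matches usernames like "nicole42": the literal name plus exactly two digits.
NICOLE_PATTERN = re.compile(r"^nicole\d{2}$", re.IGNORECASE)

def should_flag(username: str) -> bool:
    """Flag accounts whose username matches the spam-wave pattern."""
    return bool(NICOLE_PATTERN.match(username))

assert should_flag("Nicole42")
assert not should_flag("nicole")  # the bare name alone is not suspicious
```

The check is trivial; what's missing is a supported place to hook it in.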
If users have to report a problem, your moderation has already failed half its job. Chances are, users will block the offending user and your quick turnaround in processing the report won't be noticed.
So their only takeaway will be "I saw a problem".
The more issues you can address before users have to see and report them, the fewer problems your users will perceive.
Stuff like getting in touch with users to let them know you took care of their report (which I wish were built-in, but which we have to do manually by DM) can mitigate that perception problem to a certain extent, but even so, the perception will be "ho boy, the mods sure are putting out a lot of fires" when your goal should be that the house doesn't even start burning to begin with.
I don't want automated, algorithmic moderation, but if we want to avoid that and still grow the fediverse¹, then we're going to need more semi-automated systems that support mods in identifying problems early and quickly.
1: as is evidently the goal of corporate and semi-corporate players like Mastodon gGmbH / future Mastodon foundation and W3C AP group
Fedi software that automatically flags all new instances with fewer than 20 or more than 2000 users for manual review.
Fedi software that keeps track of how often each unblocked instance boosts content from blocked instances into your timelines, so moderators can see which instances are gateways to nasty corners of the 'verse.
Fedi software that automatically flags floods of near-identical posts for manual review (sketched below).
I have so many dreams…
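To sketch just that near-identical-posts dream (crude Python; it assumes some hook hands the tool each post's text, and real tooling would also need time windows and per-domain counters):

```python
import hashlib
import re
from collections import Counter

seen = Counter()  # content fingerprint -> number of times seen

def fingerprint(content: str) -> str:
    """Normalise a post (lowercase, strip digits, collapse whitespace)
    so near-identical spam copies map to the same fingerprint."""
    normalised = content.lower()
    normalised = re.sub(r"\d+", "", normalised)
    normalised = re.sub(r"\s+", " ", normalised).strip()
    return hashlib.sha256(normalised.encode()).hexdigest()

def check_post(content: str, threshold: int = 5) -> bool:
    """Return True once the same (normalised) text has shown up
    'threshold' times, i.e. when it should go to manual review."""
    fp = fingerprint(content)
    seen[fp] += 1
    return seen[fp] >= threshold
```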
Fedi instance software that, whenever a new instance makes first contact, queries its API for whether it has open registrations and immediately limits it if so (a sketch follows below).
A mod can dream…
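Half of that dream is already possible: Mastodon's public `GET /api/v2/instance` endpoint reports whether registrations are open and whether approval is required, and the admin API can limit a domain. What's missing is the first-contact trigger. A sketch of the decision logic, assuming the remote runs Mastodon 4.x and you have an admin token with `admin:write` scope:

```python
import requests

def has_open_signups(domain: str) -> bool:
    """Ask a (Mastodon) instance whether sign-ups are open and unapproved.
    Non-Mastodon servers won't answer this endpoint; treat errors as unknown."""
    try:
        info = requests.get(f"https://{domain}/api/v2/instance", timeout=10).json()
        reg = info.get("registrations") or {}
        return bool(reg.get("enabled")) and not reg.get("approval_required")
    except (requests.RequestException, ValueError):
        return False  # can't tell; don't auto-limit on a failed lookup

def limit_if_open(domain: str, own_instance: str, admin_token: str) -> None:
    """Limit (silence) the domain via Mastodon's existing admin API."""
    if has_open_signups(domain):
        requests.post(
            f"https://{own_instance}/api/v1/admin/domain_blocks",
            headers={"Authorization": f"Bearer {admin_token}"},
            json={"domain": domain, "severity": "silence",
                  "private_comment": "auto-limit: open, unapproved sign-ups"},
            timeout=10,
        ).raise_for_status()
```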
For the love of god, when you report another user, describe the issue!
Yes, even if you've attached posts.
Yes, even if you think it's obvious.
And if you haven't reported any specific posts, and haven't described the issue, how do you expect us to do anything about it?
Also, keep in mind: if we suspend an account, their posts will be deleted after 30 days. Thanks to some genius at Mastodon gGmbH, they are not retained with the ticket for review purposes, not even links to the original posts. So if we later want to review why a certain account was suspended, that's a lot harder when the reporting user didn't send a description of the problem.
So for fuck's sake, use that text field! Please!
Trade offer:
We receive: description of your report
You receive: moderation
(This post represents the personal opinion of the author and is not an official statement, endorsed by, or representative of the collective opinion of the Eldritch Café moderation team.)
@gnulinux
> Maybe this short introduction can help bring a little more life into the forum and the chat channels.
That would definitely be really nice
#fediadmins #fedimods #mastoadmin #mastoadmins #mastomod #mastomods #fedimins
meta, twice removed; subtoot if you will; also general thoughts on meta; mentions of rape and slurs (not related to anything current)
Does #Tusky not let you select a reason for your report? I noticed a lot of reports we get from Tusky users have "Other" as the report reason, rather than "Spam", "Rule violation", etc.; even when the reason is very obvious.
Concept: Mastodon feature that, whenever first contact from a new instance is made, automatically limits it if sign-ups are open and unrestricted.
Me in 2019: Let’s run an InfoSec and a regional Mastodon instance - Should be FUN!
Me today: Discussing whether reported accounts are Russian troll bots and how far these election-influencing operations would go. “Do you really think they use a (virtual) iOS device, tunnel that to Hamburg, and then activate the Apple privacy network..?”
Good luck to all the #mastomod and #mastoadmin!!!
There is a big fucking bug in Mastodon's moderation features.
When I as a moderator suspend a remote account...
Expected behavior: The account can no longer be found or interacted with on our server and the account can't find or interact with accounts on our server.
Actual behavior: The account can still be found but appears empty. There is a notice that the account is suspended, and there is even still a follow button - it just doesn't do anything.
And the account can still see everything from our instance and interact with our posts. Our users just won't see the interaction, because the replies won't appear here.
But if someone else replies to a reply from the suspended account, we'll see that reply, but won't know what original post it belongs to.
Is this a new bug with Mastodon 4.3 or has it always been like that?
I've been moderating for 5 years now and this is new to me.
I'm not going to create an account on Microsoft GitHub to officially file this bug report, but can someone tell me if it's already been reported and what its status is? Thanks.
Additional ideal expected behavior: Previous replies to any of our posts by the suspended account won't be seen by anyone anywhere on the Fediverse - or at least not as replies to our posts, because our instance notifies/federates all other instances that these replies shouldn't be there.
With yet another spam wave going around, I'm seriously considering writing a little tool for myself to bundle the usual per-report steps into one action.
It would only shave ~1 minute off of each report, but across hundreds of reports, that would save hours.
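Most of it could be a thin wrapper around the admin API that already exists (`POST /api/v1/admin/accounts/:id/action` and `POST /api/v1/admin/reports/:id/resolve` are real Mastodon 4.x endpoints; the bundling itself is just my sketch):

```python
import requests

INSTANCE = "https://example.social"  # your own instance; token needs admin scopes

def handle_spam_report(report_id: str, account_id: str, token: str) -> None:
    """One call instead of several clicks: suspend the reported account,
    attach the action to the report, then mark the report resolved."""
    headers = {"Authorization": f"Bearer {token}"}
    requests.post(
        f"{INSTANCE}/api/v1/admin/accounts/{account_id}/action",
        headers=headers,
        json={"type": "suspend", "report_id": report_id,
              "text": "spam wave, see attached report"},
        timeout=10,
    ).raise_for_status()
    requests.post(
        f"{INSTANCE}/api/v1/admin/reports/{report_id}/resolve",
        headers=headers, timeout=10,
    ).raise_for_status()
```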
If you want to use a retractable block list for the current spam wave, use this script: https://mastodon.scot/@gunchleoc/111974599296575736
I'll provide the configuration in follow-up posts
IFTAS is hosting a list for the current spam wave: https://mastodon.iftas.org/@iftas/113273669404917189
And Seirdy has instructions for *oma instances: https://pleroma.envs.net/objects/a98a016a-0378-4620-bcf5-e0e6632a48f7
I've spoken at length about my belief that the fediverse is at a size where it can't survive without some sort of automated or semi-automated moderation tooling (https://eldritch.cafe/@amberage/tagged/MastoMod).
It's basically email: tons of providers (instances) that send messages to each other (posts, follows, etc.), and there is no guaranteed standard of moderation for all of them. Malicious actors can spin up their own or abuse existing ones.
We've seen that in the recent spam waves, e.g. "kuroneko" and the weekly spam waves from mastodon.social.
That is, in the long run, more than volunteers can handle. So there needs to be some automated solution to detect and contain spam.
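Even a dumb heuristic would go a long way against mention spam like the recent waves. The check itself is trivial (a crude sketch; the thresholds are picked arbitrarily by me); what's missing is the hook to run it on inbound posts:

```python
def looks_like_mention_spam(mention_count: int, account_age_days: int,
                            max_mentions: int = 8) -> bool:
    """Crude heuristic: a brand-new account mentioning many users at once
    is almost always spam. Flag for manual review rather than act alone."""
    return account_age_days < 2 and mention_count > max_mentions
```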
The API does not support that at present.
It has to.