Hey all,
I’m the author of lemmyverse.net and I’ve recently been working on a new moderation tool called Lemmy Modder. https://modder.lemmyverse.net/
Currently, it supports user registration approvals and content report management. I offer it either as a hosted app (currently only compatible with Lemmy 0.18 instances) or as a package that you can run alongside your Lemmy instance (using Docker Compose).
Feel free to give it a go and send any feedback my way :) https://github.com/tgxn/lemmy-modder
Edit for a note: This tool does not save, proxy or store any of your user credentials or data to me; everything is only ever stored locally in your browser. I also do not use any website tracking tools. 👍
My approach would be to match certain keywords in the report message (“porn”, “CSAM”, “child” in various languages, or make it a textbox where mods can enter words themselves) and hide the image by default when the message contains any of them, with a click-to-expand feature so reports can still be verified.
I don’t think relying on the NSFW flag is very useful against abuse, since whoever posts abusive content is unlikely to set it, though it could be a good feature for moderators of instances that host NSFW communities.
You could also control the behaviour with a setting, just in case.
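Roughly what I have in mind, as a sketch (all the names here are hypothetical, not actual Lemmy Modder code):

```typescript
// Hypothetical sketch of the keyword check; none of these names come from Lemmy Modder.
const DEFAULT_KEYWORDS = ["porn", "csam", "child"]; // could be extended per-language

interface FilterSettings {
  hideReportedMedia: boolean; // the "just in case" toggle
  extraKeywords: string[]; // words mods enter themselves
}

function shouldHideMedia(reportReason: string, settings: FilterSettings): boolean {
  if (!settings.hideReportedMedia) return false;
  const reason = reportReason.toLowerCase();
  return [...DEFAULT_KEYWORDS, ...settings.extraKeywords].some((word) =>
    reason.includes(word.toLowerCase())
  );
}
```

Matching on the lowercased report text keeps it simple; per-language word lists could be layered on top of the defaults.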
Yeah, I’ve experienced this. It’d be pretty easy to add a report-contains-word filter for the images, but content that doesn’t match any of the words would still be an issue.
I’ve created a bug for myself (https://github.com/tgxn/lemmy-modder/issues/63) if you want to add any additional information or track my progress. :D
I also want to add an in-app popup (with the image/website) when you click the content directly, instead of navigating to the actual content in a new tab, so this could be combined with it to only show content on click (as a configuration option, defaulting to only show on “expand”).
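As a rough sketch of what that hidden-by-default, expand-on-click wrapper could look like in React (component and prop names are made up, not from the actual codebase):

```tsx
// Hypothetical click-to-expand wrapper; not the real Lemmy Modder component.
import React, { useState } from "react";

interface HiddenMediaProps {
  src: string;
  hiddenByDefault: boolean; // driven by the keyword filter / config setting
}

export function HiddenMedia({ src, hiddenByDefault }: HiddenMediaProps) {
  const [revealed, setRevealed] = useState(!hiddenByDefault);

  // Show a placeholder until the moderator explicitly expands the media.
  if (!revealed) {
    return <button onClick={() => setRevealed(true)}>Show reported media</button>;
  }

  return <img src={src} alt="reported content" style={{ maxWidth: "100%" }} />;
}
```

The same component could later host the in-app popup, since the reveal action is already a single click handler.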