Hey, Lemmies! I’ve been pondering an idea to enhance our automod system, and I’d love to get your input. LLMs have proven to be quite adept at sentiment analysis, consistently delivering accurate results. Here’s what I’m thinking: if we give the LLM a set of instance rules and feed it a message, we can ask it whether the message adheres to those rules. This approach could give us a robust automod that works effectively in most cases. What are your thoughts? Let’s discuss and explore the possibilities together!

Example usage:

Rules

  1. No bigotry - including racism, sexism, ableism, homophobia, transphobia, or xenophobia. Code of Conduct.
  2. Be respectful, especially when disagreeing. Everyone should feel welcome here.
  3. No porn.
  4. No Ads / Spamming.

Does this message adhere to the rules? Answer only with yes/no and, if not, provide a short sentence for the report.

Message
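
(In practice, the message to be checked would go under Message.) To make the idea a bit more concrete, here is a rough sketch of what such a check could look like, assuming an OpenAI-compatible chat completions endpoint with an admin-supplied API key; the client library, endpoint, model name, and the check_message helper are all placeholders for illustration, not a concrete design:

```python
import re

import requests  # any HTTP client would do; requests is just for illustration

# The instance rules, taken from the example above.
RULES = """\
1. No bigotry - including racism, sexism, ableism, homophobia, transphobia, or xenophobia.
2. Be respectful, especially when disagreeing. Everyone should feel welcome here.
3. No porn.
4. No Ads / Spamming."""

PROMPT = (
    "Rules\n{rules}\n\n"
    "Does this message adhere to the rules? Answer only with yes/no "
    "and, if not, provide a short sentence for the report.\n\n"
    "Message\n{message}"
)


def check_message(message: str, api_key: str) -> tuple[bool, str]:
    """Ask the LLM whether a message adheres to the instance rules.

    Returns (adheres, report_reason). Endpoint and model are placeholders
    for whatever service the instance admin configures.
    """
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": "gpt-4o-mini",  # placeholder model name
            "messages": [
                {"role": "user", "content": PROMPT.format(rules=RULES, message=message)}
            ],
            "temperature": 0,
        },
        timeout=30,
    )
    answer = resp.json()["choices"][0]["message"]["content"].strip()
    adheres = answer.lower().startswith("yes")
    # Whatever follows the leading "no" becomes the report reason.
    reason = "" if adheres else re.sub(r"(?i)^no[\s,.:-]*", "", answer)
    return adheres, reason
```

An automod hook could run something like this on each new comment and file a report with the returned reason whenever the answer is no.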

  • jbenguira@lemmy-u3.vm.elestio.app · 1 year ago
    We could add an openapi option: server admins would be able to provide an API key, and we could use that for auto moderation. It’s fast and cheap enough ($20 per 10M characters).
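
For a rough sense of scale, assuming the quoted rate and something on the order of 1,000 characters per check (rules plus comment), $20 per 10,000,000 characters works out to about $0.002 per check, i.e. roughly 10,000 moderated comments for $20.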