Content moderators who worked on ChatGPT say they were traumatized by reviewing graphic content: ‘It has destroyed me completely.’

Moderators told The Guardian that the content they reviewed depicted graphic scenes of violence, child abuse, bestiality, murder, and sexual abuse.

  • QubaXR@lemmy.world · 43 points · edited · 1 year ago

    It’s the same story with pretty much any platform. While you may know YouTube as the tool where AI/software randomly bans innocent channels and demonetizes videos based on false positives, it actually has a small army of human moderators. These poor folks are constantly exposed to some of the most vile content humankind produces >! ::: beheadings, rape, torture, child abuse::: !< etc.

    I once worked on a project aiming to help their mental well-being, but honestly, while a step in the right direction, I don’t think it made much difference.

    Edit: attempting to nest multiple spoiler formats

    • fubo@lemmy.world · 13 points · edited · 1 year ago

      >!beheadings, rape, torture, child abuse!<

      That’s not how spoiler markup works here.

      It works this way instead.

      ::: spoiler yo yo
      dirty stuff here
      :::


      Edited to add: Apparently there’s not exactly consensus across different interfaces for what spoiler markup is supposed to be. Aaaargh!!
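
      For reference, the two forms being mixed up above are the Reddit-style inline spoiler and the fenced spoiler block that Lemmy's default web UI documents; which one a given client actually renders varies, which is exactly the consensus problem mentioned. A sketch of both (the title text "yo yo" is just an example):

      ```
      Reddit-style inline spoiler:
      >!hidden text here!<

      Lemmy-style spoiler block (collapsible, with a visible title):
      ::: spoiler yo yo
      dirty stuff here
      :::
      ```

      Nesting one form inside the other, as attempted in the parent comment, generally falls back to plain text on clients that only support one of the two.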