Content moderators who worked on ChatGPT say they were traumatized by reviewing graphic content: ‘It has destroyed me completely.’

Moderators told The Guardian that the content they reviewed depicted graphic scenes of violence, child abuse, bestiality, murder, and sexual abuse.

  • AlmightySnoo 🐢🇮🇱🇺🇦@lemmy.world · 1 year ago

    He said that many of these passages centered on sexual violence and that the work caused him to grow paranoid about those around him. He said this damaged his mental state and his relationship with his family.

    Another former moderator, Alex Kairu, told the news outlet that what he saw on the job “destroyed me completely.” He said that he became introverted and that his physical relationship with his wife deteriorated.

    The moderators told The Guardian that the content up for review often depicted graphic scenes of violence, child abuse, bestiality, murder, and sexual abuse.

    A Sama spokesperson told the news outlet that workers were paid from $1.46 to $3.74 an hour. Time previously reported that the data labelers were paid less than $2 an hour to review content for OpenAI.

    Sam deserves to be sued to bankruptcy at this point.

  • QubaXR@lemmy.world · 1 year ago

    It’s the same story with pretty much any platform. While you may know YouTube as the tool where AI/software randomly bans innocent channels and demonetizes videos based on false positives, it actually has a small army of human moderators. These poor folks are constantly exposed to some of the most vile content humankind produces >! ::: beheadings, rape, torture, child abuse::: !< etc.

    I once worked on a project aiming to help their mental well-being, but honestly, while a step in the right direction, I don’t think it made much difference.

    Edit: attempting to nest multiple spoiler formats

    • fubo@lemmy.world · 1 year ago

      >!beheadings, rape, torture, child abuse!<

      That’s not how spoiler markup works here.

      It works this way instead.

      yo yo dirty stuff here


      Edited to add: Apparently there’s not exactly consensus across different interfaces for what spoiler markup is supposed to be. Aaaargh!!
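
      For anyone trying to get it to work: Lemmy’s web UI documents the ::: spoiler fence shown below, while the Reddit-style >!hidden text!< works on some clients but not others, which would explain the inconsistency. A minimal sketch (the title and hidden text here are just placeholders):

      ::: spoiler tap to reveal
      hidden text goes here
      :::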

  • Flying Squid@lemmy.world · 1 year ago

    These would be the Kenyan moderators getting paid $2 an hour to go through that.

    But Sam Altman will save the world for sure.

  • Prater@lemmy.world · 1 year ago

    Aside from the pay in this case (which is obviously unacceptable), content moderation is just a terrible job in general, because with people being people, horrible stuff will inevitably be uploaded to the internet. This is one job where I think it would actually be good to have AI take over, once it reaches that point.