• carrion0409@lemm.ee · 26↑ 14↓ · 3 days ago

    This is the type of shit that radicalizes me against generative AI. It’s done so much more harm than good.

    • BaseModelHuman@lemmy.world · 21↑ 11↓ · 3 days ago

      The craziest thing to me is that there were movements advocating the creation of CP through AI to help those addicted to it, on the grounds that it “wasn’t real” and there were no victims involved. But there were no comments regarding how the model gained the training data to generate those images, or the damage that will come when such things get normalized.

      It just should never be normalized or exist.

      • Chozo@fedia.io · 37↑ · 3 days ago

        Just for what it’s worth, you don’t need CSAM in the training material for a generative AI to produce CSAM. The models know what children look like, and what naked adults look like, so they can readily extrapolate from there.

        The fact that you don’t need to actually supply any real CSAM to the training material is the reasoning being offered for supporting AI CSAM. It’s gross, but it’s also hard to argue with. We allow all types of illegal subjects to be presented in porn: incest, rape, murder, etc. While most mainstream sites won’t host those types of material, none of them are technically outlawed - partly because of freedom of speech and artistic expression and yadda yadda, but also partly because it all comes with the understanding that it’s a fake, made-for-film production and that nobody involved had their consent violated. It’s okay because none of it was actually rape, incest, or murder. And if AI CSAM can be made without violating the consent of any real people, then what makes it different?

        I don’t know how I feel about it, myself. The idea of “ethically-sourced” CSAM doesn’t exactly sit right with me, but if it’s possible to make it in a truly victimless manner, then I find it hard to argue outright banning something just because I don’t like it.

        • PM_Your_Nudes_Please@lemmy.world · 3↑ · edited · 1 day ago

          The fact that you don’t need to actually supply any real CSAM to the training material is the reasoning being offered for supporting AI CSAM. It’s gross, but it’s also hard to argue with.

          Yeah, this is basically the crux of the issue. When you get into the weeds and look past the surface-level “but it needs CSAM to make CSAM” misconception, arguments against it basically boil down to “but it’s icky.” Which… Yeah. It is. But should something being icky automatically make it illegal, even if there are no victims?

          I hate to make the comparison (for a variety of reasons), but until fairly recently homosexuality was classified by psychologists as a form of destructive/dangerous kink, largely because straight people had the same “but it’s icky” response whenever it got brought up. We have moved away from that as time has passed, because we have recognized that being gay is not just a kink, it’s not just a choice, and it’s not inherently dangerous or harmful.

          To contrast that, pedophilia has remained stigmatized. Because even if it passed the first two “it’s not just a kink/choice” tests, it still failed the “it’s not harmful” test. Consuming CSAM was inherently harmful, and always had a victim. There was no ethical way to view CSAM. But now with AI, it can actually begin passing that third test as well.

          I don’t know how I feel about it, myself. The idea of “ethically-sourced” CSAM doesn’t exactly sit right with me, but if it’s possible to make it in a truly victimless manner, then I find it hard to argue outright banning something just because I don’t like it.

          This is really the biggest hurdle. To be clear, I’m not arguing that being an active pedo should be decriminalized. But it is worth examining whether we’re basing criminality purely off of the instinctual “but it’s icky” response that the public has when it gets discussed. Is that response enough of a justification for making/keeping it illegal? And if your answer was “yes,” what if it could help pedos avoid consuming real CSAM, and therefore reduce the number of future victims? If it could legitimately reduce the number of victims but you still want to criminalize it, then you are not actually focused on reducing harm; you’re focused on feeling righteous instead. The biggest issue right now is that harm reduction is very hard to study, because it is such a taboo topic. Even finding subjects to self-report is difficult or impossible. So we’ll have no idea what kinds of impact (positive or negative) AI will realistically have on CSAM consumption until after it is widely available.

        • Womble@lemmy.world · 19↑ · 3 days ago

          It’s a very difficult subject; both sides have merit. I can see the argument that “CSAM created without abuse could be used in the treatment/management of people with these horrible urges,” but I can also see that “allowing people to create CSAM could normalise it and lead to more actual abuse.”

          Sadly it’s incredibly difficult for academics to study this subject and see which of those two effects is more prevalent.

        • CptBread@lemmy.world · 12↑ 2↓ · 3 days ago

          There is also the angle that generated CSAM looking real would add difficulty to prosecuting real CSAM producers.

          • PM_Your_Nudes_Please@lemmy.world · 1↑ · 1 day ago

            This is actually why I’d be in favor of AI generators creating a hash database of their generated images. If legalized, they should be required to maintain records of the images they have produced. So that if those images appear elsewhere, they can be verified as AI generated.

            It would be a monumental effort to actually get the AI companies to agree to it willingly. But that’s why legislation exists.

          • MagicShel@lemmy.zip · 13↑ 1↓ · 2 days ago

            This, above any other reason, is why I’m most troubled by AI CSAM. I don’t care what anyone gets off to if no one is harmed, but the fact that real CSAM could be created and be indistinguishable from AI-generated material is a real harm.

            And I instinctively ask, who would bother producing it for real when AI is cheap and harmless? But people produce it for reasons other than money and there are places in this world where a child’s life is probably less valuable than the electricity used to create images.

            I fundamentally think AI should be completely uncensored. Because I think censorship limits and harms uses for it that might otherwise be good. I think if 12 year old me could’ve had an AI show me where the clitoris is on a girl or what the fuck a hymen looks like, or answer questions about my own body, I think I would’ve had a lot less confusion and uncertainty in my burgeoning sexuality. Maybe I’d have had less curiosity about what my classmates looked like under their clothes, leading to questionable decisions on my part.

            I can find a million arguments for why AI shouldn’t be censored. Like, did you know ChatGPT can be convinced that describing vaginal and oral sex in romantic fiction is fine, but if it’s anal sex, it has a much higher refusal rate? Is that subtle anti-gay encoding in the training data? It also struggles with polyamory when it’s two men and a woman, but less when it’s two women and a man. What’s the long-term impact when these biases are built into everyday tools? These are concerns I consider all the time.

            But at the end of the day, the idea that there are children out there being abused and consumed and no one will even look for them because “it’s probably just AI” isn’t something I can bear no matter how firm my convictions are about uncensored AI. It’s something I struggle to reconcile.

            • Ænima@lemm.ee · 1↑ · 2 days ago

              Maybe the weird extra-finger and appendage issues in AI images are a feature, not a bug. Maybe they’re a naturally occurring, unintended consequence of the learning and feedback process that sabotages the output in order to make it obvious the image is fake.

              /s (sort of)

          • Eggyhead@fedia.io · 3↑ · 2 days ago

            I’d say that’s more of an AI industry issue than anything else. All AI art needs to be easily identifiable and sourced as such, though I doubt AI producers will want to put hidden tags on all their AI-generated work.

      • Brainsploosh@lemmy.world · 18↑ · edited · 3 days ago

        Nuanced take coming, take a breath:

        I agree that child sexual abuse is a horrible practice, along with all other violence and oppression, sexual or not. But the attraction de facto exists, and has for thousands of years, even through intense taboos. It seems our current strategy of shaming and ignoring it has been ineffective. The definition of insanity being repeating the same thing expecting different results, and all that.

        Short of eugenics (and from previous trials maybe not even then) we might not be able to get rid of it.

        So when do we try other ways of dealing with it?

        I’m not saying generative AI is the solution, but I’m pretty sure denying harder isn’t it.

          • Brainsploosh@lemmy.world · 13↑ 1↓ · edited · 2 days ago

            Sexuality is often treated as a more complex topic than emotions, but I found a similar meta-study, “The role of conditioning, learning and dopamine in sexual behavior: A narrative review of animal and human studies” (2014), concluding that conditioning and associative learning do occur around sexuality and can be used as a basis for treatment.

            From other sources I’ve read, there are so many influences going into sexuality that it’s impossible to see how it develops, but from a layman’s perspective I’d agree that not reinforcing child abuse probably makes it rarer.

            My remaining issue is that, with such a simplistic view, any non-normative sexuality can/should be conditioned away. We already have the abusive gay conversion camps; should we go back and do the same with polygamy, BDSM, porn? How much should fashion dictate what sexuality is allowed?

            (Roman-style orgies seem to have faded in popularity, but tantra and swinging seem to have risen lately; which should we be conditioning away? Who decides?)

      • rice@lemmy.org · 9↑ 1↓ · 3 days ago

        Probably got all the data to train for it from the Pentagon. They’re known for having tons of it, and a lot of their staff (more than 25%) are used to seeing it frequently.

        It’s easily searchable, though I don’t like to search for that shit. If you literally add “pentagon” to c____ p___ in a search, a million articles on DIFFERENT subjects (than this House bill) come up. Here’s one post: https://thehill.com/policy/cybersecurity/451383-house-bill-aims-to-stop-use-of-pentagon-networks-for-sharing-child/

            • swelter_spark@reddthat.com · 1↑ · 1 day ago

              All laptops are supposed to be formatted and have the necessary software freshly installed before being assigned to someone. Either it wasn’t wiped by accident, or the person whose job it was to wipe it found the CP and left it, hoping my dad would report it. He deleted it, though, because he was afraid he’d be blamed.

      • carrion0409@lemm.ee · 13↑ 6↓ · edited · 3 days ago

        Anything like that involving children or child-like individuals is a hard fucking no from me. It’s like those mfs who have art of a little anime girl and go “actually she’s a 5000-year-old vampire.” They know exactly what the fuck they’re doing. I also hate the “it’s not real” argument; like, mf, the sentiment is still there.