• 0 Posts
  • 170 Comments
Joined 7 months ago
Cake day: November 22nd, 2023

  • Because of American Puritanical values, which dictate what credit card companies and advertisers are willing to do business with, and which shape the cultural zeitgeist along with them.

    The Puritans were some of the earliest British colonists in the US. They were either thrown out of England for attempting a coup to replace the king with a puppet who would force their more extremist form of Christianity on the country, or they left of their own accord because they felt the Church of England was too liberal. They were basically a bunch of prudes who believed that the human body and sex were shameful and disgusting.

    This has led to the dichotomy where advertisers want nothing to do with sex or nudity, yet happily use implied sex in their own advertisements. Because sex is bad, but it also sells, which is good.

  • Welcome to the hypocritical world of Puritan culture.

    Some of the earliest British settlers in the US were so extremist that the Church of England kicked them out after they tried to assassinate the king and replace him with a puppet of their own, in order to force their beliefs on the rest of the country.

    It was partly these zealots who started the whole “sex and bodies are bad and shameful” attitude in the US that advertisers still hold today. Swearing is yet another of those weird taboos. But sex sells, so it’s okay to imply it as long as it’s selling a product, and at no other time.


  • That’s what I was thinking. Apart from the porn locked up in the Disney vault, big companies aren’t in the business of making porn, and the companies that are won’t be interested in deep fakes. The people using Photoshop to create porn are small fry to Adobe. Deep-fake porn has been around as long as photo manipulation has, and Adobe hasn’t cared before.

    Bearing that in mind, I don’t think this policy has anything to do with AI deep fakes or porn. I think it’s more likely to be about some new revenue source, like farming data for LLM training. They could go the Tumblr route and use AI to censor content, but considering Tumblr couldn’t tell the difference between the Sahara Desert and boobs, that approach is one major-company fuck-up away from litigation hell. The only way deep fakes would explain this policy is if Adobe believes governments are going to start holding companies liable for the content people make with their products.