

They definitely prefer to spend their money on development, rather than adding safeguards
I don’t believe people misusing ChatGPT helps them in any way; it’s just that adding protections has a cost
But they aren’t able to tell when a child is at risk and report that as well?
Maybe the police actually sort and filter reports manually, but don’t want to bother with mental health things? You know how the USA works. I don’t believe OpenAI will go too far; they’ll just report at random.
I might even have been reported, for all I know; sometimes I just like to see how LLMs react when I say I’ll commit horrible stuff like school shootings or terrorism. The NSA will just feed it into their mass-surveillance algorithm to flag the most important profiles, and that will be that.
The war on drugs is so much more important than mental health detection, y’know. It sells more.
Coming soon: McDonald’s gets sued for selling burgers to a minor who ate 3 burgers every day and died! McDonald’s must set per-customer thresholds and collect IDs from minors!
I would be for holding companies responsible when they fuck up, like McDonald’s clearly marketing burgers to minors and claiming they’re healthy, but we must not hold them accountable for more than they’re actually responsible for.
Soon: Hugging Face gets sued because hosted models are being used to get drug recipes, or because it didn’t actively prevent people from killing themselves with them.
By the way, I was recently testing https://nano-gpt.com/ which claims to offer privacy through TEE models… but I don’t see how it’s private in any way. It only guarantees that the output was produced inside some TEE; it doesn’t guarantee that the input and output weren’t leaked elsewhere or logged.
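To make the point concrete, here’s a rough sketch (Python, with entirely made-up field names, since I haven’t looked at nano-gpt’s actual API, and a real check would verify a signed quote chaining back to the CPU vendor rather than compare a hash) of what a client-side attestation check does and doesn’t tell you:

```python
import hashlib

# Hypothetical attestation record a TEE-backed inference service might return
# alongside a completion. Field names are invented for illustration only.
EXPECTED_ENCLAVE_MEASUREMENT = "a3f1..."  # hash of the code we trust to run inside the enclave

def verify_attestation(attestation: dict, response_text: str) -> bool:
    """Check that the response was produced inside an enclave we recognize.

    This proves *where* the output was computed. It says nothing about
    whether the prompt or the response were also copied, logged, or
    forwarded outside the enclave by the host before/after inference.
    """
    measurement_ok = attestation.get("enclave_measurement") == EXPECTED_ENCLAVE_MEASUREMENT
    # Bind the attestation to this specific response so it can't be replayed.
    response_hash = hashlib.sha256(response_text.encode()).hexdigest()
    binding_ok = attestation.get("response_sha256") == response_hash
    return measurement_ok and binding_ok

# Even if verify_attestation(...) returns True, the plaintext prompt still
# passed through the provider's frontend, load balancer, billing, etc., and
# nothing in the attestation rules out logging at those layers.
```

So the attestation narrows the trust boundary to the enclave itself, but everything outside it (including whoever terminates TLS) is still just “trust us”.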