Are you seriously equating security software running on business systems with state violence / surveillance on people? Those two things are not even remotely comparable, starting with the fact that business systems aren't people with rights
Except if you continue reading beyond your quote, it goes on to explain why that actually doesn’t help.
Companies and their legal departments do care though, and that’s where the big money lies for Microsoft when it comes to Windows
Training and fine-tuning happen offline for LLMs; they don’t continuously learn by interacting with users. Sure, the company behind a model might record conversations and use them to further tune it, but these models don’t inherently need that
Happened with Lone Echo for me. It’s a VR game where you’re in a space station, and you move around in zero g by just grabbing your surroundings and pulling yourself along or pushing yourself off of them. I started reflexively attempting to do that in real life for a bit after longer sessions
HTTP is not Google-controlled, you don’t need to replace that in order to build something new without Google
There’s also this part:
But Johansson’s public statement describes how they tried to schmooze her: they approached her last fall and were given the FO, contacted her agent two days before launch to ask for reconsideration, launched it before they got a response, then yanked it when her lawyers asked them how they made the voice.
Which is still not an admission of guilt, but seems very shady at the very least, if it’s actually what happened.
Except Discord is not an ads-based platform? I’ve never seen a third-party ad on there
where anyone thinks it’s ok or normal to recommend suicide to people
Except that’s already happening even without it being normalized, there have always been assholes that are gonna tell people to kill themselves, especially if they’ve never seen the person they’re talking to before. I don’t see how this is any different.
Literally the whole thing would not have happened without the policy.
It also wouldn’t have happened if a fucked up system wasn’t withholding actual, reasonable alternatives that the person was clearly asking for. That’s my point. Let’s fix the actual problems, rather than try to silence the symptoms.
…and did you notice how everyone was outraged by that? That incident was not an issue with assisted suicide being available, that was an issue with fucked up systems withholding existing alternatives and a tone-deaf case worker (who is not a doctor) handling impersonal communications. Maybe it’s also an issue with this kind of thing being able to be decided by a government worker instead of medical and psychological professionals. But definitely nothing about this would have been made better by assisted suicide not being generally available for people who legitimately want it, except the actual problem wouldn’t have been put into the spotlight like this.
I don’t want to create a future where, “I’ve tried everything I can to fix myself and I still feel like shit,” is met with a polite and friendly, “Oh, well have you considered killing yourself?”
Are you for real? This kind of thing is a last resort that nobody is going to just outright suggest unprompted to a suffering person, unless that person asks for it themselves. No matter how “normalized” suicide might become, it’s never gonna be something doctors will want to recommend. That’s just… Why would you even think that’s what’s gonna happen
Except the email in question is not a newsletter. Companies often use separate mailing list services for important product announcements and similar things as well. Obviously there should be a process in place that removes you from these external services too when you delete your account, but I assume this is what broke down in this case
It’s not quite that simple, though. GDPR is only concerned with personally identifiable information. Answers and comments on SO rarely contain that kind of information as long as you delete the username on them, so it’s not technically against GDPR if you keep the contents.
And science fiction somehow can’t be fascist?
I was thinking of an approach based on cryptographic signatures. If all images that come from a certain AI model are signed with a digital certificate, you can tamper with the metadata all you want, but you’re not gonna be able to produce the correct signature for an image unless you have access to the certificate’s private key. This technology has been around for ages, is used in every web browser, and would be pretty simple to implement.
The only weak point with this approach would be that it relies on the private key not being publicly accessible, which makes this a lot harder or maybe even impossible to implement for open source models that anyone can run on their own hardware. But then again, at least for what we’re talking about here, the goal wouldn’t need to be a system covering every model, just one that makes at least a couple models safe to use for this specific purpose.
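To illustrate the idea: here’s a toy sketch of signing and verifying image bytes. The numbers are a deliberately tiny, insecure RSA key pair chosen purely so the example is self-contained; a real implementation would use Ed25519 or RSA-2048 via a proper crypto library, and the `sign`/`verify` helpers are made up for this sketch.

```python
import hashlib

# Toy RSA key: public (n, e), private exponent d. INSECURE, illustration only.
n, e, d = 3233, 17, 2753  # n = 61 * 53

def sign(image_bytes: bytes) -> int:
    """The model vendor signs the image's hash using the private key d."""
    h = int.from_bytes(hashlib.sha256(image_bytes).digest(), "big") % n
    return pow(h, d, n)

def verify(image_bytes: bytes, signature: int) -> bool:
    """Anyone can check the signature with only the public key (n, e)."""
    h = int.from_bytes(hashlib.sha256(image_bytes).digest(), "big") % n
    return pow(signature, e, n) == h

image = b"AI-generated image data"
sig = sign(image)
assert verify(image, sig)            # genuine signature checks out
assert not verify(image, (sig + 1) % n)  # a forged signature fails
```

The key property is exactly what’s described above: without `d`, you can’t produce a valid signature for any image, no matter what metadata you attach.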
I guess the more practical question is whether this would be helpful for any other use case. Because if not, I highly doubt it’s gonna be implemented. Nobody is gonna want the PR nightmare of building a feature with no other purpose than to help pedophiles generate stuff to get off to “safely”, no matter how well intentioned
Yeah but the point is you can’t easily add it to any picture you want (if it’s implemented well), thus providing a way to prove that the pictures were created using AI and that no harm was done to children in their creation. It would be a valid solution to the “easy to hide actual CSAM among AI-generated pictures” problem.
AI is just impossibly far away.
Sure it’s pretty far away, but it’s also moving at breakneck speed. Last year low-res spaghetti-eating Will Smith body horror was the pinnacle of AI-generated video; today we’re already generating videos that take at least a second look to identify as AI-generated. The big question is at what point that improvement rate will start to level off.
I mean… It might be. Just depends on how much potential there still is to get models up to higher reasoning capabilities, and I don’t think anyone really knows that yet
That’s already happening. Slightly different example, but Home Assistant has an integration that gives an LLM of your choice control over your home automation devices. Just talking to your home in natural language without having to memorize very specific phrases is honestly pretty powerful, as long as it works correctly. You can say stuff like “hey it’s a bit dark in the office”, and it just knows to either switch on the office lights, or make them brighter if they’re already on
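The pattern behind this is the LLM translating a fuzzy request into a structured action that the home controller then executes. Here’s a minimal sketch of that flow; the function names (`handle_request`, `set_light`) and the hard-coded stand-in for the LLM are all made up for illustration, this is not Home Assistant’s actual API.

```python
# Toy smart-home state: one light, currently off.
lights = {"office": {"on": False, "brightness": 0}}

def set_light(room: str, on: bool, brightness: int) -> None:
    """Controller-side tool the LLM is allowed to call."""
    lights[room] = {"on": on, "brightness": brightness}

def fake_llm(utterance: str) -> dict:
    """Stand-in for the LLM: maps a vague request to a structured tool call."""
    if "dark" in utterance and "office" in utterance:
        state = lights["office"]
        # If the light is already on, brighten it; otherwise just turn it on.
        target = min(state["brightness"] + 40, 100) if state["on"] else 60
        return {"tool": "set_light", "room": "office",
                "on": True, "brightness": target}
    return {}

def handle_request(utterance: str) -> None:
    """Dispatch whatever structured action the model chose."""
    action = fake_llm(utterance)
    if action.get("tool") == "set_light":
        set_light(action["room"], action["on"], action["brightness"])

handle_request("hey it's a bit dark in the office")
```

The interesting part is the contextual decision (turn on vs. brighten), which in the real integration is the LLM’s job, not a hand-written rule like here.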