Kids “easily traceable” from photos used to train AI models, advocates warn.
I mean, that’s true, and could be a perfectly legitimate privacy issue, but that seems like an issue independent of training AI models. Like, doing facial recognition and such isn’t really new.
Stable Diffusion or similar generative image AI stuff is pretty much the last concern I’d have over a photo of me. I’d be concerned about things like:
Automated inference of me being associated with other people, based on facial or other recognition of us appearing together in photos (roughly the kind of matching sketched after this list).
Automated tracking using recognition in video. I could totally see someone like Facebook or Google, with their huge image libraries, offering store owners a service that automatically flags potential shoplifters, if they’re allowed to run recognition on the in-store camera footage. And once you start connecting cameras and doing recognition at scale, you can do mass surveillance of a whole society.
I’m not really super-enthusiastic about use of fingerprint data for biometrics either, since I’ve got no idea how far that data travels. Probably not the end of the world, but if you’ve been using, say, Google or Apple’s automated fingerprint unlock, I don’t know whether they have enough data to forge a thumbprint and authenticate as you somewhere else. A fingerprint is a non-revocable credential.
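Just to make that “recognition of us together in photos” point concrete, here’s a rough sketch of what the matching looks like with the off-the-shelf face_recognition Python library. The photo filenames are made up for illustration; the point is how little code this takes once you have the images.

```python
# Hypothetical sketch: does the same person appear in two photos?
# Uses the face_recognition library (pip install face_recognition).
# The filenames below are placeholders, not real data.
import face_recognition

known = face_recognition.load_image_file("photo_of_me.jpg")
unknown = face_recognition.load_image_file("group_photo.jpg")

known_encodings = face_recognition.face_encodings(known)
unknown_encodings = face_recognition.face_encodings(unknown)

if known_encodings and unknown_encodings:
    # Compare my face against every face found in the group photo.
    matches = face_recognition.compare_faces(
        unknown_encodings, known_encodings[0], tolerance=0.6
    )
    if any(matches):
        print("Same person in both photos -> association inferred")
```

Run that over a big enough photo dump and you have a social graph, no generative model required.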
Like, I feel that there are very real privacy issues associated with having a massive image database, and that those may have been ignored. It just…seems a little odd that people would ignore all of that, and only write about it when it comes to training a generative image model on the photos, which is pretty far down the list of actual issues I’d have.
And all that aside, let’s say that someone is worried about someone generating images of 'em with one of these models.
Even if you culled photos of kids from Stable Diffusion’s base training set, the “someone could generate porn” concern in the article isn’t addressed. Someone can build their own model or, with far less training time, a LoRA for a specific person.
kagis
Here’s an entire collection of models and LoRAs on Civitai, each trained on a particular actress. The Stable Diffusion base model doesn’t have them, which is exactly why people went out and built their own. And the “actress” tag isn’t gonna cover every model trained on a specific person, just probably a popular category of them.

https://civitai.com/tag/actress
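For a sense of how low the bar is once such a LoRA exists, here’s a hedged sketch of what using one looks like with the diffusers library. The LoRA filename is a placeholder, not a real file; the base model ID is just the stock SD 1.5 weights.

```python
# Hypothetical sketch: stock Stable Diffusion plus a downloaded
# person-specific LoRA. The .safetensors filename is a placeholder.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The base model knows nothing about this person; the LoRA adds that in.
pipe.load_lora_weights("some_person_lora.safetensors")

image = pipe("a photo of the person the LoRA was trained on").images[0]
image.save("out.png")
```

Culling photos from the base training set does nothing to stop this; the person-specific training happens entirely outside it.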
And that is even before you get to inpainting/outpainting-style techniques that start with a real photo of a person, do no training on that photo at all, and just generate the surrounding parts of the image with a model.
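A sketch of that last approach with diffusers’ inpainting pipeline, again with placeholder filenames. Note that nothing here trains on the subject’s photo at all.

```python
# Hypothetical sketch: keep the real face from a photo, regenerate
# everything around it. Filenames are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("original_photo.png").convert("RGB").resize((512, 512))
# White regions of the mask get regenerated; black regions (the face, say)
# are kept from the original photo untouched.
mask = Image.open("mask_keep_face.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="whatever scene you want around the person",
    image=init_image,
    mask_image=mask,
).images[0]
result.save("generated.png")
```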