I’m a dev and I was browsing Mozilla’s careers page when I came across this. I find it a little odd that a privacy-respecting company is interested in building an AI-powered recommendation engine. Wouldn’t they need to sift through the very data we want kept private for a recommendation engine to be any good? Curious what others think.
I don’t think it’s odd. They can use data they already have and make use of it.
Mozilla already has a huge amount of information submitted by volunteers to train their own subject-specific LLM.
And as we saw from Meta’s nearly ethical-consideration-devoid CM3Leon paper (no, I will not pronounce it “Chameleon”), you don’t need a huge dataset to train if you supplement with your own preconfigured biases. For better or worse.
Just because something is “AI-powered” doesn’t mean the training datasets have to be acquired unethically. Even if there is something to be said about making material public and the inevitable consequences of how it can be used.
I hope whoever gets the job can help pave the way for ethics standards in AI research.
Ironically, this comment reads just like an AI wrote it.
The irony that AI-generated responses are difficult to distinguish from the writing rules educators harassed me into complying with is something I’ve found pretty amusing lately. It’s a bias built into the system, but it has the unintended opposite effect of delegitimising actual human opinions. What an own-goal for civilisation.
I am regrettably all too human. I have even been issued hardware keys to prove it!
Yeeeeeah, this is a little sus. AI is more about surveillance than actually providing a service, but let them cook, I guess? Maybe they’re just following the marketing trends, and the trends say put AI somewhere in your ad copy to get people to look at you. Just another buzzword…I hope.