A lot of journalists, at least historically, wanted to do this. Unfortunately they've been more and more kneecapped over time by news companies pushing either a bias or clicks.
Stronger guardrails can help, sure. But gathering new input and training a new model is (by the old analogy) the equivalent of replacing a failing vending machine with a different model from the same company.
The problem is that if you do the same thing with an LLM for hiring or other job systems, the failure and bias instead come from the model itself being bigoted, which, while illegal, is hidden inside a model that was essentially trained to be a more effective bigot.
You can't hide your race from an LLM that was accidentally trained to recognize which job histories are traditionally Black, or any other proxy for race.
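A minimal sketch of why dropping the protected attribute doesn't help: the data below is entirely made up for illustration, with a hypothetical "proxy" feature (think zip code or employer name) that leaks group membership most of the time. Any model trained on the proxy can effectively recover the attribute you tried to hide.

```python
import random

random.seed(0)

# Hypothetical synthetic data: 'group' is the protected attribute,
# 'proxy' is a correlated feature (e.g. a zip code or employer name)
# that leaks group membership 90% of the time.
rows = []
for _ in range(10_000):
    group = random.random() < 0.5
    proxy = group if random.random() < 0.9 else not group
    rows.append((group, proxy))

# Even after 'group' is removed from the inputs, 'proxy' still
# predicts it, so a model trained on 'proxy' inherits the bias.
agree = sum(g == p for g, p in rows) / len(rows)
print(f"proxy matches group {agree:.0%} of the time")
```

The point of the toy numbers is that no resume-scrubbing step can remove information the model can reconstruct from correlated features.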
If I commission a vending machine, receive one that was made automatically and runs itself, and set it up to operate in my store, then I am responsible if it eats someone's money without giving them their item, gives the wrong thing, or dispenses dangerous products.
This has already been settled, and it's why vending machines can be opened up and repaired, with each mechanism under direct control.
An LLM making business decisions has no such controls or safety mechanisms.
Having worked at IKEA, I can tell you this doesn't stop people.