I can appreciate that. Arguably these folks might be more likely to vote because they aren't stuck in the mud of nuance: the answers they see are clearer and more obvious, and the alternatives may as well not exist. No contemplation of what they don't know, in a way.
But, on the other hand, as mentioned, we can't really pick who votes without opening Pandora's box. The best thing we can do is not to punish but to rehabilitate: to model stronger behaviours, to identify why they behave this way, and to help them build stronger critical thinking skills. Punishment is polarizing.
Fun, maybe related, note: I've researched some more classical AI approaches and took classes with some greats in the field who are now my colleagues. One of them has many children, every one of them absurdly successful globally. He mathematically proved that, at least for this form of AI, when you both reward good behaviour and punish bad behaviour (correct responses, incorrect responses), the AI takes much longer to learn: it spends a long time stuck on certain correct points and fails to develop a varied strategy, or takes a long time to do so. If you just reward correct responses and don't punish incorrect ones, the AI builds a much stronger model for answering a variety of questions. He said he applied that thinking to his kids, too, to what he considered great success.
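The contrast he's describing can be sketched as a toy weight-based learner on a two-armed bandit. This is only an illustration of the two update rules (reward-only vs. reward-and-punish), not his actual proof or model; all names and numbers here are made up for the example.

```python
import random

def train(punish_incorrect, rounds=2000, seed=0):
    """Toy 2-armed bandit: arm 0 answers correctly 70% of the time,
    arm 1 only 30%. The learner keeps one weight per arm and picks
    an arm with probability proportional to its weight."""
    rng = random.Random(seed)
    weights = [1.0, 1.0]
    for _ in range(rounds):
        total = weights[0] + weights[1]
        arm = 0 if rng.random() < weights[0] / total else 1
        correct = rng.random() < (0.7 if arm == 0 else 0.3)
        if correct:
            weights[arm] *= 1.05          # reward: strengthen the chosen arm
        elif punish_incorrect:
            weights[arm] *= 0.95          # punishment: weaken the chosen arm
    return weights

# Reward-only learner vs. reward-and-punish learner on the same task.
w_reward_only = train(punish_incorrect=False)
w_reward_punish = train(punish_incorrect=True)
```

Both learners end up favouring the better arm; the interesting part in his result was the *dynamics*, i.e. how long each takes and how readily it keeps exploring, which a sketch this small can only hint at.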
I think there's something to that, and I've seen it in my own teaching. But the difficulty now has been getting students with this mindset to even try to get something correct or incorrect in the first place. They just… give up, or only kick into action after it's too late, and by then they don't know how to handle it because they never learned. Inaction is often the worst action, since it kills any hope of learning, or of building the skills of learning.
For me, it tracks, but the caveat is a sharp increase in accumulated burnout. No self-regulation needed? No problem. Except when you can't self-regulate a healthy workload or deal with competing demands.