Also of note: the UN OHCHR is bluntly critical of austerity as a human rights abuse, due to the way it targets minority groups: https://www.ohchr.org/en/social-security/austerity-measures-and-right-social-security
Not mentioned is the way it helps establish disabled people as a permanent underclass. We are simply less than human. In Australia, the more disabled you are, the more you’re exposed to being killed or maimed in an institution, or, slightly “better”, winding up homeless and exposed to violence and other crimes (if your state likes packing people into shelters like sardines) or the elements (if it doesn’t).
I’m not sure how the tech is progressing, but ChatGPT was completely dysfunctional as an expert system, if the AI field still cares about those. You can adapt the Chinese Room argument to ask whether a model actually has applicability outside of a particular domain (say, anything requiring guessing words from probabilities, or stabilising a robot).
Another problem is that probabilistic reasoning requires data. Just because a particular problem-solving approach is very good at guessing words based on a huge amount of data from a generalist corpus doesn’t mean it’s good at guessing in areas where data is sparse. Could you comment on whether LLMs have good applicability as expert systems in, say, medicine? Especially for obscure diseases, or heterogeneous neurological conditions (or both, as in bipolar disorders and schizophrenia-related disorders)?
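To make the data-sparsity point concrete, here’s a minimal sketch in plain Python (not tied to any particular model, and with made-up numbers purely for illustration): the same estimated probability carries wildly different uncertainty depending on how many observations back it, which is roughly the gap between a common presentation and an obscure one.

```python
# Sketch: sparse data -> wide uncertainty on an estimated probability.
# Numbers are invented for illustration only.
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for an estimated probability."""
    if n == 0:
        return (0.0, 1.0)  # no data at all: anything goes
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))) / denom
    return (centre - half, centre + half)

# Same point estimate (0.30), very different amounts of supporting data.
for label, seen, total in [("well-documented pattern", 30_000, 100_000),
                           ("obscure presentation", 3, 10)]:
    lo, hi = wilson_interval(seen, total)
    print(f"{label}: estimate {seen/total:.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
```

Running this, the well-documented case pins the probability down to a few thousandths either way, while the sparse case spans roughly 0.11 to 0.60, which is the kind of uncertainty you’d expect a model to be working with on rare or heterogeneous conditions.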