Report finds newer inferential models hallucinate nearly half the time while experts warn of unresolved flaws, deliberate deception and a long road to human-level AI reliability
My point is that the rate of improvement is slowing down, and its capabilities are often overblown. On the surface it does something amazing, but then flaws are pointed out by those with a better understanding of the subject matter, and those flaws get excused with fluff words like “hallucinations”.
All it needs to do is produce fewer flaws than the average human. It has already passed that mark for many general use cases (which many people said would never happen). The criticism is now shifting to more and more specialized work, but the AI continues to improve in those areas as well.