Absolutely this. An LLM is basically trained to be good at fooling us into thinking it is intelligent, and it is very good at it.
It doesn’t demonstrate how good it is at what it does; it demonstrates how easy it is to fool us.
My company provides Copilot for software engineering, and I use it in my IDE.
The problem is that it produces code that looks accurate, but it often isn’t, so I frequently end up disabling it. I think it might help in areas where I don’t know what I’m doing, since it can get me some working code, but that is a double-edged sword: if I don’t know what I’m doing, I won’t be able to catch the issues either.
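To give a made-up illustration of what I mean (not actual Copilot output), this is the kind of thing that reads fine at a glance but behaves wrong:

    # Hypothetical example of code that "looks accurate" but isn't:
    # the mutable default argument means every call without an explicit
    # list keeps appending to the same shared list.
    def collect(value, bucket=[]):
        bucket.append(value)
        return bucket

    print(collect(1))  # [1]
    print(collect(2))  # [1, 2]  <- surprising if you expected [2]

If you don’t already know to look for that kind of thing, you won’t spot it in a review of generated code either.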
I’ve also noticed that even when what it produces is correct, I can frequently write a simpler and shorter version that fits my use case. It looks a lot like the code students put on GitHub when they post their homework assignments, and I guess that’s what it was trained on.
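A rough sketch of that "homework assignment" style (again, an invented example, not a real suggestion I got):

    # The step-by-step version the tool tends to suggest:
    def get_even_numbers_verbose(numbers):
        result = []
        for number in numbers:
            if number % 2 == 0:
                result.append(number)
        return result

    # The shorter version I'd usually write for my own use case:
    def get_even_numbers(numbers):
        return [n for n in numbers if n % 2 == 0]

Both are correct; one is just more to read and maintain.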
And you pinpointed exactly the issue right there… People who don’t know what they’re doing asking something that can’t reason to do something that neither of them understands. It’s like the dumbest realization of the singularity we could possibly achieve.
An LLM is basically trained to be good at fooling us into thinking it is intelligent, and it is very good at it.
That’s a fascinating concept. An LLM is really just a specific kind of machine learning, and machine learning can be amazing: it can be used to create algorithms that detect cancer, predict protein functions, or design new chemical structures. An LLM is just an algorithm generated using machine learning that deceives people into thinking it’s intelligent. That seems like a very accurate description to me.