ChatGPT use declines as users complain about ‘dumber’ answers, and the reason might be AI’s biggest threat for the future::AI for the smart guy?
Nonsense. Fewer people are using it because there are viable alternatives and the broader novelty has worn off.
I use it every day in my job and the quality of answers only drops off when prompts are poorly crafted.
By and large, the average user doesn’t understand the fundamentals of prompt engineering.
The suggestion that “answers are increasingly dumber” is embarrassing.
Unfortunately I don’t agree with you. Different things have changed over time:

- GPT-3.5 was replaced by GPT-3.5-turbo, a faster and cheaper but weaker model
- the maximum number of tokens per prompt for GPT-4 in the web UI was reduced
- heavy RLHF fine-tuning was applied, which hurts performance on some tasks such as programming
- additional safeguards against jailbreaking were added, which make the model follow custom instructions less strictly

All these changes have made working with GPT less pleasant, and more difficult for very advanced and specialized cases, particularly with GPT-4, which at the beginning was particularly good.
This was really enlightening. Do you have some articles that elaborate? ☺️
Regarding 3.5-turbo, you can check the documentation: the old 3.5 models are now labeled “legacy”. Regarding the max number of tokens for GPT-4, you can try it yourself: it used to be around 8k, and it is now around 4k from the web UI.
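If you want to probe the limit from the API side instead, here is a minimal sketch (assuming the 2023-era `openai` Python client and `tiktoken`; the filler text is just padding for illustration):

```python
# Probe the model's context limit by sending an oversized prompt and
# reading the limit out of the error message.
import openai
import tiktoken

openai.api_key = "sk-..."  # your key here

enc = tiktoken.encoding_for_model("gpt-4")
filler = "lorem " * 6000  # well past an 8k-token window
print("prompt tokens:", len(enc.encode(filler)))

try:
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": filler + "\nReply with OK."}],
    )
    print(resp.choices[0].message.content)
except openai.error.InvalidRequestError as e:
    # The error message states the model's actual maximum context length.
    print(e)
```

(From the web UI you can only test this by pasting progressively longer text by hand.)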
There is a talk from the OpenAI CIO (if I recall correctly) where he describes how reinforcement learning from human feedback (RLHF) actually decreased the models’ performance when it comes to programming. I cannot find it now, but it is around on YouTube.
The additional safeguards against jailbreaking are what OpenAI has been focusing on for the past months, with heavy use of RLHF; you can Google their official statements regarding the “safety” of the model. I have a bunch of standard pre-prompts I have been using to initialize my chats since the beginning, and over time you could see the model following those instructions less and less strictly.
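For what it’s worth, this is roughly how you can check whether a fixed pre-prompt is still being followed (a sketch against the 2023-era chat completions API; the pre-prompt and the pass/fail heuristic are made up for illustration):

```python
import openai

# An illustrative pre-prompt; real ones would be domain-specific.
PRE_PROMPT = "Always answer in exactly one sentence and never apologise."

def follows_instructions(question: str) -> bool:
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=0,  # keep runs comparable over time
        messages=[
            {"role": "system", "content": PRE_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    answer = resp.choices[0].message.content
    # Crude heuristic: at most one sentence, no apology boilerplate.
    return answer.count(".") <= 1 and "sorry" not in answer.lower()

print(follows_instructions("Why is the sky blue?"))
```

Run the same probes every few weeks and any drift shows up as a falling pass rate.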
The problem with OpenAI is that they never released the exact number of parameters they are using, nor detailed benchmarks. And the benchmarks you find online refer to the APIs, which behave differently from the chat web UI (for instance you get a longer context, and you can set the temperature and the system prompt; they are probably even different models, who knows… all of it is closed).
Measuring the performance of LLMs is pretty tricky, since minimal changes can have big effects (see https://huggingface.co/blog/evaluating-mmlu-leaderboard), and unfortunately I haven’t found good resources that properly track ChatGPT’s performance (from the web UI) over time, across iterations.
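Lacking that, the closest thing is a DIY longitudinal eval: re-run a fixed probe set on a schedule and log the answers, so drift at least becomes visible. A minimal sketch (the probe set, scoring, and file name are all made up for illustration):

```python
import csv
import datetime
import openai

# (name, question, substring expected in a correct answer)
PROBES = [
    ("arithmetic", "What is 17 * 23?", "391"),
    ("date", "What weekday is 2023-04-19?", "Wednesday"),
]

with open("chatgpt_drift_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for name, prompt, expected in PROBES:
        resp = openai.ChatCompletion.create(
            model="gpt-4",
            temperature=0,
            messages=[{"role": "user", "content": prompt}],
        )
        answer = resp.choices[0].message.content
        writer.writerow([datetime.date.today(), name, expected in answer, answer])
```

This only covers the API, of course; the web UI would still have to be probed by hand.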
Thank you for the detailed reply 👍🏻
I was skeptical at first, but I’ve seen enough evidence now. There are definitely times when it’s dumb as a brick; whether the filters just get in the way too much or they’ve implemented other changes, I don’t know. I’d really love the unchained version.
On the 23rd of March 2023 I asked a family member to give me a prompt, and they asked: “What day is the 19th of April?”
It answered “The 19th of April falls on a Tuesday.”, which was true for last year but completely misleading if we were talking about the coming month.
Was it wrong or just unclear? Either way it wasn’t helpful.
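For reference, the two readings of the question give different weekdays; a quick check:

```python
import datetime

# The question is ambiguous without a year, and the two readings disagree.
for year in (2022, 2023):
    d = datetime.date(year, 4, 19)
    print(d, d.strftime("%A"))

# 2022-04-19 Tuesday    <- what the model answered (the previous year)
# 2023-04-19 Wednesday  <- the upcoming date at the time of asking
```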
I use it daily too and haven’t had any of the issues I see written about.