• fearout@kbin.social · 1 year ago

    Yep, definitely. I have a plus subscription, and stuff that was easy for it just a few months ago now seems to take several back-and-forths to barely approach similar results.

    Science content is where I noticed the most degradation. It just gives me blank “it’s not in my training data” answers to questions that used to get comprehensive responses a while ago.

    I think they’re scaling down the models to make them cheaper to run?

    • NXTR@kbin.social · 1 year ago

      They’re definitely reducing model performance to speed up responses. ChatGPT was at its best when it took forever to write out a response. Lately I’ve noticed that ChatGPT will quickly forget information you just told it, ignore requests, hallucinate randomly, and show a myriad of other problems I didn’t have when the GPT-4 model was released.