• stravanasu@lemmy.ca · 1 year ago

    Title:

    ChatGPT broke the Turing test

    Content:

    Other researchers agree that GPT-4 and other LLMs would probably now pass the popular conception of the Turing test. […]

    researchers […] reported that more than 1.5 million people had played their online game based on the Turing test. Players were assigned to chat for two minutes, either to another player or to an LLM-powered bot that the researchers had prompted to behave like a person. The players correctly identified bots just 60% of the time

    A complete contradiction. Trash Nature; it’s become nothing more than an extremely expensive gossip science magazine.

    PS: The Turing test involves comparing a bot with a human (not knowing which is which). So if more and more bots pass the test, this can be the result either of an increase in the bots’ Artificial Intelligence, or of an increase in humans’ Natural Stupidity.

  • Peanut@sopuli.xyz · 1 year ago

    Funny, I don’t see much talk in this thread about François Chollet’s Abstraction and Reasoning Corpus, which is emphasised in the article. It’s a really neat take on how to measure the capacity for thought.

    A couple of things stick out to me about GPT-4 and the like: the lack of understanding in realms that require multimodal interpretation, the inability to break down word and letter relationships due to tokenization, the lack of true emotional ability, and the similarity to the “leap before you look” aspect of our own subconscious ability to pull words out of our own ass. Imagine if you could only say the first thing that comes to mind, without ever thinking or correcting before letting the words out.
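    The tokenization point can be made concrete with a toy example. The vocabulary below is invented for illustration (real tokenizers such as BPE learn their subword vocabularies from data), but the effect is the same: the model receives whole chunks as opaque IDs and never sees individual letters, which is why letter-level questions trip it up.

```python
# Toy illustration of why letter-level tasks are hard for LLMs.
# VOCAB and the greedy longest-match rule are made-up stand-ins for a
# learned BPE vocabulary; real tokenizers work differently in detail.

VOCAB = ["straw", "berry", "str", "aw", "ber", "ry"]

def tokenize(text: str) -> list[str]:
    """Greedy longest-match tokenization, a crude stand-in for BPE."""
    tokens = []
    i = 0
    while i < len(text):
        # Pick the longest vocabulary entry matching at position i.
        match = max(
            (v for v in VOCAB if text.startswith(v, i)),
            key=len,
            default=text[i],  # fall back to the raw character
        )
        tokens.append(match)
        i += len(match)
    return tokens

print(tokenize("strawberry"))  # ['straw', 'berry']
```

    Ten letters collapse into two tokens, so a question like “how many r’s are in strawberry?” has to be answered without ever seeing the letters themselves.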

    I’m curious about what things will look like after solving those first couple problems, but there’s even more to figure out after that.

    Going by recent work I enjoy from Earl K. Miller, we seem to have oscillatory cycles of thought which are directed by wavelengths in a higher dimensional representational space. This might explain how we predict and react, as well as hold a thought to bridge certain concepts together.

    I wonder if this aspect could be properly reconstructed in a model, or through functions built around concepts like the “Tree of Thoughts” paper.
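    The core loop of that idea can be sketched in a few lines. Everything below is a toy: `generate_thoughts` and `score` are stand-ins for what would really be LLM calls, and the beam search is the simplest possible version of the paper’s search over candidate “thoughts”.

```python
# Toy sketch of the "Tree of Thoughts" idea: instead of emitting the first
# completion ("leap before you look"), generate several candidate thoughts,
# score them, and expand only the most promising ones.

import heapq

def generate_thoughts(state: str) -> list[str]:
    # Stand-in: propose next-step continuations of a partial solution.
    return [state + c for c in "abc"]

def score(state: str) -> float:
    # Stand-in: heuristic value of a partial solution (higher is better).
    return state.count("a") - 0.1 * len(state)

def tree_of_thought_search(root: str, depth: int, beam_width: int = 2) -> str:
    frontier = [root]
    for _ in range(depth):
        candidates = [t for s in frontier for t in generate_thoughts(s)]
        # Keep only the best few candidates (beam search over thoughts).
        frontier = heapq.nlargest(beam_width, candidates, key=score)
    return max(frontier, key=score)

print(tree_of_thought_search("", depth=3))  # 'aaa' under this toy scorer
```

    The contrast with the subconscious “first thing that comes to mind” mode is exactly the point: here, weak candidate thoughts get pruned before they ever become output.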

    It’s really interesting comparing organic and artificial methods and abilities to process or create information.

    • Zapp@beehaw.org · 1 year ago

      “At Viridian Dynamics, we build our robots with ethical AI, whatever that means; so that humans and androids can live in peace - we hope.”

  • bedrooms@kbin.social · 1 year ago

    Honestly, though, I can’t even decide whether other people have consciousness. Cogito ergo sum, if you know what I’m talking about.

    • Droggl@lemmy.sdf.org · 1 year ago

      I don’t remember the numbers, but IIRC it was covered by one of the validation datasets, and GPT-4 did quite well on it.

      • Maestro@kbin.social · 1 year ago

        Yeah, but did it do well on the specific examples from the Winograd paper? Because ChatGPT probably just learned those, since they are well known and oft repeated. Or does it do well on brand-new sentences constructed according to the Winograd schema?

    • webghost0101@sopuli.xyz · 1 year ago

      The Chinese Room argument makes no sense to me. I can’t see how it’s different from how young children understand and learn language.

      My 2-year-old sometimes unmistakably starts counting when playing (a countdown for lift-off). Most of the numbers are gibberish, but he often says a real number in the midst of it. He is clearly just copying and does not understand what counting is. At some point, though, he will not only count correctly but also be able to answer math questions. At what point does he “understand”? And at what point would you consider that ChatGPT “understands”?

      There was an old TV program where some then-AI experts discussed the Chinese Room, but they used a Chinese restaurant for a more realistic setting. It ended with: “So if I walk into a Chinese restaurant, pick something from the Chinese menu, and can answer anything the waiter may ask, in Chinese, do I know or understand Chinese?” I remember the parties agreeing to disagree at that point.

      • conciselyverbose@kbin.social · 1 year ago

        ChatGPT will never understand. LLMs have no capacity to do so.

        To understand, you need underlying models of real-world truth to build your word salad on top of. LLMs have none of that.

        • Ferk@kbin.social · 1 year ago

          Note that “real world truth” is something you can never accurately map with just your senses.

          No model of the “real world” is accurate, and not everyone maps the “real world truth” they personally experience through their senses in the same way… or even necessarily in a way that’s truly “correct”, since the senses are often deceiving.

          A person who is blind experiences the “real world truth” by mapping it to a different set of models than someone who has additional visual information to mix into that model.

          However, that doesn’t mean that the blind person can “never understand” the “real world truth” …it just means that the extent to which they experience that truth is different, since they need to rely on other senses to form their model.

          Of course, the more different the senses and experiences between two intelligent beings, the harder it will be for them to communicate in a way they can truly empathize with. At the end of the day, when we say we “understand” someone, what we mean is that we have found enough evidence to hold the belief that some aspects of our models are similar enough. It doesn’t really mean that what we modeled is truly accurate, nor that if we didn’t understand them then our model (or theirs) is somehow invalid. Sometimes two people are technically referring to the same “real world truth”; they simply don’t understand each other and focus on different aspects/perceptions of it.

          Someone (or something) not understanding an idea you hold doesn’t mean that they (or you) aren’t intelligent. It just means you both perceive/model reality in different ways.

          • @Barbarian772 I don’t have to. It’s the ChatGPT people making extremely strong claims about equivalence of ChatGPT and human intelligence. I merely demand proof of that equivalence. Which they are unable to provide, and instead use rhetoric and parlor tricks and a lot of hand waving to divert and distract from that fact.

            • Barbarian772@feddit.de · 1 year ago

              GPT 4 is already more intelligent than the average human. Is it more intelligent than the most intelligent human? No, but most humans aren’t either. Can it create new knowledge? No, but the average human can’t either.

              How can you say it isn’t intelligent?

              • @Barbarian772 no, GPT is not more “intelligent” than any human being, just as a calculator is not more “intelligent” than any human being, even if it can perform certain specific operations faster.

                Since you used the term “intelligent”, though, I would ask for your definition of it: ideally one that excludes calculators but includes human beings. Without such a clear definition, this is, again, just hand-waving.

                I wrote about it at somewhat greater length:
                https://rys.io/en/165.html

                • Barbarian772@feddit.de · 1 year ago

                  I think the Wikipedia definition is fine: https://en.m.wikipedia.org/wiki/Intelligence. Excluding AI just because it’s AI is, IMO, plain stupid and goes against all scientific principles.

                  I have definitely met humans who are less intelligent than ChatGPT. It can hold a conversation and ace every standardized test we have. It has passed law exams, medical exams, and other exams from many different countries.

                  Can you give me a definition of intelligence that excludes ChatGPT but includes all human beings? And no, just excluding computers for the sake of it doesn’t count.