• agent_flounder@lemmy.one · 1 year ago

    Appreciate the detailed response!

    Indeed, intelligence is …a difficult thing to define. It’s also a fascinating area to ponder. The reason I asked was to get an idea of where your head is at with the claims you made.

    Now, I admit I haven’t done a lot with gpt-4 but your comments make me think it is worth the time to do so.

    So you indicate gpt-4 can reason. My understanding is that gpt-4 is an LLM, basically a large-scale Markov chain, trained to respond with appropriate output based on input (questions).

    On the one hand, my initial reaction is: no, it doesn’t reason; it just mimics or simulates the human reasoning that came before it in text form.

    On the other hand, if a program could perfectly simulate whatever processes are involved in reasoning by a human to the point that they’re indistinguishable, is it not, in effect, reasoning? (I suppose this amounts to a sort of Turing Test but for reasoning exercises).

    I don’t know how gpt-4 LLMs work yet. I imagine that, if it is a Markov model (specifically a Markov chain) trained on human language, then the underlying semantics are sort of implicitly captured in the statistical model. Simplistically: if many sentences reflect human knowledge that cars are vehicles and not animals, then it’s statistically unlikely for anyone to write about the attributes and actions of animals when talking about cars. I assume the LLM is of such a scale that it permits this apparently emergent behavior.
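    Taking the Markov-chain simplification at face value, the idea can be sketched in a few lines. This is a toy bigram model over a made-up corpus (purely illustrative, not how gpt-4 actually works): raw co-occurrence counts alone are enough to keep "car" statistically tied to vehicle words and away from animal attributes.

```python
from collections import defaultdict, Counter

# Tiny made-up corpus standing in for training text.
corpus = (
    "the car is a vehicle . the car is fast . the car has wheels . "
    "the dog is an animal . the dog has fur ."
).split()

# Count bigram transitions: how often each word follows another.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent successor of `word` in the corpus."""
    return transitions[word].most_common(1)[0][0]

# "car" was followed by "is" twice and "has" once, so:
print(most_likely_next("car"))        # -> is
# "fur" never followed "car", so the model gives it zero weight:
print(transitions["car"]["fur"])      # -> 0
```

    Scale that up by many orders of magnitude (and condition on far more than one previous word) and you get something like the "implicit semantics" described above.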

    I am skeptical about judgement calls. I would think some sensory input would be required. I guess we have to outline various types of judgement calls to really dig into this.

    I am willing to accept that gpt-4 simulates the portions of the brain that deal with semantics and syntax, both the receiving and transmitting abilities. And, maybe to some degree, knowledge and understanding.

    I think “very similar to a complete brain” is an overstatement, as the brain also does some amazing things with vision, hearing, proprioception, and touch, among other things. Human brains can analyze situations and take initiative, understand how things work and apply that understanding to their repair, improvement, or duplication. We can understand and solve problems, and so on. In other words, I don’t think you’re giving the brain anywhere near enough credit. We aren’t just Q&A machines.

    We also have to be careful of the human tendency to anthropomorphize.

    I’m curious to look into vector databases and their applications here. Addition of what amounts to memory, or like extended context, sounds extremely interesting.
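    As I understand the vector-database idea, it amounts to nearest-neighbor retrieval over embeddings: a minimal sketch, with made-up 4-dimensional vectors standing in for real embeddings (a real system would get these from an embedding model and store far more of them):

```python
import numpy as np

# Hypothetical stored "memories" and their made-up embedding vectors.
memory_texts = [
    "user prefers metric units",
    "user's project is written in Rust",
    "user asked about Markov chains yesterday",
]
memory_vecs = np.array([
    [0.9, 0.1, 0.0, 0.2],
    [0.1, 0.8, 0.3, 0.0],
    [0.0, 0.2, 0.9, 0.4],
])

def recall(query_vec, k=1):
    """Return the k stored texts most similar to the query (cosine similarity)."""
    q = query_vec / np.linalg.norm(query_vec)
    m = memory_vecs / np.linalg.norm(memory_vecs, axis=1, keepdims=True)
    scores = m @ q                      # cosine similarity per memory
    top = np.argsort(scores)[::-1][:k]  # indices of best matches
    return [memory_texts[i] for i in top]

# A query embedding close to the third memory retrieves it:
print(recall(np.array([0.05, 0.15, 0.85, 0.3])))
```

    The retrieved text would then be prepended to the prompt, which is why it acts like extended context or long-term memory rather than anything inside the model itself.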

    Interesting to ponder what the world would be like with AGI taking over the jobs of most knowledge workers, artists, and so on. (I wonder if someone could create a CEO replacement…)

    What does it mean for a capitalist society with masses of people permanently unemployed? How does the economy work when nobody can afford to buy anything because they’re unemployed? Does this create widespread poverty and collapse or a post-scarcity economy in some sectors?

    Until robotics catches up to Asimov’s vision, at least, manual labor is safe. Truly replacing a human body with a robot is still a ways off, given the lack of progress on several fronts.