• theluddite@lemmy.ml · +141/−1 · 1 year ago

    “I gave an LLM a wildly oversimplified version of a complex human task and it did pretty well”

    For how long will we be forced to endure different versions of the same article?

    The study said 86.66% of the generated software systems were “executed flawlessly.”

    Like I said yesterday, in a post celebrating how ChatGPT can do medical questions with less than 80% accuracy: that is trash. A company with absolute shit code still has virtually all of it “execute flawlessly.” Whether or not code executes is not the bar by which we judge it.
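    To make that concrete, here’s a minimal, made-up sketch (mine, not from the study) of why “it executed” is a meaningless bar: this runs without a single error and still gives a confidently wrong answer.

    ```python
    # Hypothetical example: "executes flawlessly," still wrong.
    def average(numbers):
        # Off-by-one bug: divides by one too many, so every result is skewed.
        return sum(numbers) / (len(numbers) + 1)

    print(average([10, 20, 30]))  # Prints 15.0; the correct average is 20.0.
    ```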

    Even if it were to hit 100%, which it does not, there’s so much more to making things than this obviously oversimplified simulation of a tech company. Real engineering involves getting people in a room, managing stakeholders, navigating conflicting desires from different stakeholders, getting to know the human beings who need a problem solved, and so on.

    LLMs are not capable of this kind of meaningful collaboration, despite all this hype.

      • Corkyskog@sh.itjust.works · +13/−1 · 1 year ago (edited)

        I think it’s less of a conspiracy and more that it’s just pushing investment. These AI articles sound exactly like when the internet was new, most people had only a cursory experience with it, and people were pumping any company that so much as said the word “internet.”

        Now that “Blockchain” has been beaten to death, they need a new hype word to drive mindless investment.

    • PlexSheep@feddit.de · +24/−4 · 1 year ago (edited)

      Thank you for writing this so I only have to upvore upvote you.

      Edit: What a difference one key can make.

        • NoRodent@lemmy.world · +5 · 1 year ago

          Is it… vore but… upwards? So… vomiting people? Nah, I don’t want to know either.

          • Bleeping Lobster@lemmy.world · +3 · 1 year ago

            What’s up, vore!

            AFAIK vore is a rare fetish where someone gains sexual gratification from imagining swallowing someone whole (or imagining themselves being swallowed whole). Like the Bilquis scenes from American Gods, which I found oddly arousing.

            Oh fuck.

            • RiikkaTheIcePrincess@kbin.social · +2 · 1 year ago

              Well, there are different kinds. Not all involve swallowing a critter whole, not all involve death, not all involve, er, mouths.

              Hey wait, where’s everyone going? Oh well, more vore for me 🤣 Guess I should go check out American Gods. … And look for a particular kind of place to hang out 🤔

              • Bleeping Lobster@lemmy.world · +2 · 1 year ago

                It’s not for everyone, but I loved it and was saddened that the show got cancelled. It’s very surreal in places; the settings switch from standard middle America to jaw-droppingly stunning god-realm stuff.

    • merc@sh.itjust.works · +5 · 1 year ago

      “80% accuracy, that is trash”

      More than 80% of most codebases is boilerplate stuff: including the right files for dependencies, declaring functions with the right number of parameters using the right syntax, handling basic, easily anticipated errors, etc. Sometimes there’s even more boilerplate, like when you’re iterating over a list, or waiting for input and handling it.
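      As a rough, made-up illustration of that split (my sketch, not from the study): nearly every line below is boilerplate an LLM can pattern-match, and the commented “hard part” is the bit that actually needs a programmer who understands the problem.

      ```python
      import csv
      import sys

      def main(path):
          """Read a CSV file and summarize it -- almost entirely boilerplate."""
          rows = []
          try:
              with open(path, newline="") as f:    # boilerplate: file handling
                  for row in csv.DictReader(f):    # boilerplate: parsing + iteration
                      rows.append(row)
          except OSError as err:                   # boilerplate: anticipated error
              print(f"Could not read {path}: {err}")
              return 1
          # The hard part: deciding what the business actually needs from this data.
          # Which rows count? How are duplicates, refunds, and time zones handled?
          print(f"Read {len(rows)} rows from {path}")
          return 0

      if __name__ == "__main__":
          sys.exit(main(sys.argv[1]))
      ```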

      The rest of the stuff is why programming is a highly paid job. Even a junior developer is going to be much better than an LLM at this stuff, because at least they understand it’s hard, and they often know when to ask for help because they’re in over their heads. An LLM will “confidently” just spew out plausible bullshit and declare the job done.

      Because an LLM won’t ask for help, won’t ask for clarifications, and can’t understand that it might have made a mistake, you’re going to need your highly paid programmers to go in and figure out what the LLM did and why it’s wrong.

      Even perfecting self-driving is going to be easier than a truly complex software engineering project. At least with self-driving, the constraints are going to be limited because you’re dealing with the real world. The job is also always the same – navigate from A to B. In the software world you’re only limited by the limits of math, and math isn’t very limiting.

      I have no doubt that LLMs and generative AI will change the job of being a software engineer / programmer. But, fundamentally programming comes down to actually understanding the problem, and while LLMs can pretend they understand things, they’re really just like well-trained parrots who know what sounds to make in specific situations, but with no actual understanding behind it.

    • R0cket_M00se@lemmy.world · +4/−3 · 1 year ago

      “LLMs are not capable of this kind of meaningful collaboration”

      Which is why they’re a tool for professionals to amplify their workload, not a replacement for them.

      • CmdrShepard@lemmy.one · +3/−1 · 1 year ago

        But C-suites will read articles like this and fire their development teams “because AI can do it.” I have my popcorn ready for the day it begins.