A 2025 Tesla Model 3 in Full Self-Driving mode drives off a rural road, clips a tree, loses a tire, flips over, and comes to rest on its roof. Luckily, the driver is alive and well, and able to post about it on social media.

I just don’t see how this technology could possibly be ready to power an autonomous taxi service by the end of next week.

  • rabber@lemmy.ca · +2 · 30 minutes ago

    Elon took the wheel because that person made a mean tweet about him

  • FreedomAdvocate@lemmy.net.au · +3/-13 · 5 hours ago

    Why was the driver not paying attention, and why didn’t they just steer it back into the lane? It’s not called “Full Self-Driving (Supervised)” for no reason. Hopefully Tesla gets all the telemetry and shares why it didn’t stay on the road, and also checks whether the driver was sleeping or distracted.

    • rabber@lemmy.ca · +2 · 31 minutes ago

      Watch the video. It happens insanely quickly. And on a straight road that should be no issue, so the person’s guard was down.

  • Buffalox@lemmy.world · +31/-1 · 18 hours ago

    The car made a fatal decision faster than any human could possibly correct it. Tesla’s idea that drivers can “supervise” these systems is, at this point, nothing more than a legal loophole.

    What I don’t get is how years of this false advertising haven’t bankrupted Tesla already.

      • Buffalox@lemmy.world · +5 · edited · 2 hours ago

        For many years the “supervised” part wasn’t included; AFAIK Tesla was forced to add it.
        And in this case “supervised” isn’t even enough, because the car made an abrupt, unexpected maneuver instead of asking the driver to take over in time to react.

    • Echo Dot@feddit.uk · +18/-1 · 15 hours ago

      Because the US is an insane country where you can straight up just break the law and as long as you’re rich enough you don’t even get a slap on the wrist. If some small startup had done the same thing they’d have been shut down.

      What I don’t get is why Teslas aren’t banned all over the world for being so fundamentally unsafe.

      • Buffalox@lemmy.world · +3/-1 · 5 hours ago

        What I don’t get is why Teslas aren’t banned all over the world for being so fundamentally unsafe.

        I’ve argued this point for the past year; there are obvious safety problems with Teslas even without considering FSD.
        Like the blinker controls being on the steering wheel, manual door handles that are hard to find in emergencies, and common operations being buried in on-screen menus instead of having directly accessible buttons, which is a distraction. With Autopilot they also tend to brake for no reason, even on the autobahn with a clear road ahead! Which can also create dangerous situations.

      • ayyy@sh.itjust.works · +9/-1 · 16 hours ago

        To put your number into perspective, if it only failed 1 time in every hundred miles, it would kill you multiple times a week with the average commute distance.
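        A minimal back-of-the-envelope sketch of that arithmetic; the failure rate and the round-trip commute distance below are assumptions for illustration, not figures from the thread:

        ```python
        # Hypothetical figures only: a 1-in-100-miles failure rate and an
        # assumed ~30-mile round-trip daily commute (not data from the thread).
        failures_per_mile = 1 / 100
        daily_commute_miles = 30
        weekly_failures = failures_per_mile * daily_commute_miles * 7
        print(f"Expected failures per week: {weekly_failures:.1f}")  # ~2.1
        ```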

        • KayLeadfoot@fedia.io (OP) · +5/-1 · 15 hours ago

          Someone who doesn’t understand math downvoted you. This is the right framework for understanding autonomy: the failure rate needs to be astonishingly low for the product to have any non-negative value. So far, Tesla has not demonstrated non-negative value in a credible way.

          • bluewing@lemm.ee · +1 · 55 seconds ago

            You are trying to judge the self-driving feature in a vacuum, and you can’t do that. You need to compare it to the alternatives. And for automotive travel, the alternative to FSD is to continue to have everyone drive manually. Turns out, most clowns doing that are statistically worse at it than even FSD (as bad as it is). So FSD doesn’t need to be perfect; it just needs to be a bit better than what the average driver can do driving manually. And the last time I saw anything about that, FSD was that “bit better” than you, statistically.

            FSD isn’t perfect. No such system will ever be perfect. But, the goal isn’t perfect, it just needs to be better than you.

        • NιƙƙιDιɱҽʂ@lemmy.world · +2/-1 · 11 hours ago

          …It absolutely fails miserably fairly often and would likely crash that frequently without human intervention, though. Not to the extent here, where there isn’t even time for human intervention, but I frequently had to take over when I used to use it (post v13).

        • Echo Dot@feddit.uk · +2/-1 · 15 hours ago

          Even with the distances I drive (and I barely drive my car anywhere since COVID), I’d probably only last about a month before the damn thing killed me.

          Even ignoring fatalities and injuries, I would still have to deal with the fact that my car randomly wrecked itself, which has to be a financial headache.

      • Echo Dot@feddit.uk · +4/-1 · 15 hours ago

        That’s probably not the actual failure rate, but a 1% failure rate would be several thousand times higher than what NASA would consider an abort risk condition.

        Let’s say the risk is only 0.01%; that’s still several thousand crashes per year. Even if we could guarantee that all of them would be non-fatal and would not involve any bystanders such as pedestrians, the cost of replacing all of those vehicles every time they crashed, plus fixing the damage to the things they crashed into (lamp posts, shop windows, etc.), would be so high that it would exceed any benefit of the technology.
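        A rough sketch of how an estimate like that could land at several thousand crashes per year; reading the 0.01% figure as a per-trip risk, and the trip count and fleet size, are all assumptions for illustration, not numbers from the thread:

        ```python
        # All figures below are assumptions for illustration, not data from the thread.
        failure_prob_per_trip = 0.0001    # the hypothetical 0.01% risk, read as per-trip
        trips_per_car_per_year = 500      # assumed: roughly 1-2 trips per day
        fleet_size = 100_000              # assumed number of cars using the feature
        expected_crashes = failure_prob_per_trip * trips_per_car_per_year * fleet_size
        print(f"Expected crashes per year: {expected_crashes:,.0f}")  # ~5,000
        ```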

        It wouldn’t be as bad if this were prototype technology that was constantly improving, but Tesla has made it very clear they’re never going to add lidar scanners, so it’s literally never going to get any better; it’s always going to be this bad.

        • FreedomAdvocate@lemmy.net.au · +1/-4 · 4 hours ago

          Saying it’s never going to get better is ridiculous and demonstrably wrong. It has improved in leaps and bounds over generations. It doesn’t need LiDAR.

          The biggest thing you’re missing is that with FSD the driver is still supposed to be paying attention at all times, ready to take over like a driving instructor does when a learner is doing something dangerous. Just because it’s in FSD Supervised mode doesn’t mean you should just sit back and watch it drive you off the road into a lake.

          • Echo Dot@feddit.uk · +1 · 23 minutes ago

            You’re saying this on a video where it drove into a tree and flipped over. There isn’t time for a human to react. That’s like saying we don’t need emergency stops on chainsaws; the operator just needs to not drop it.

        • KayLeadfoot@fedia.io (OP) · +2/-1 · 15 hours ago

          …is literally never going to get any better; it’s always going to be this bad.

          Hey now! That’s unfair. It is constantly changing. Software updates introduce new regressions all the time. So it will be this bad, or significantly worse, and you won’t know which until it tries to kill you in new and unexpected ways :j

    • LePoisson@lemmy.world · +7/-1 · 15 hours ago

      It’s fine, nothing at all wrong with using just camera vision for autonomous driving. Nothing wrong at all. So a few cars run off roads or don’t stop for pedestrians or drive off a cliff. So freaking what, that’s the price for progress my friend!

      I’d like to think this is unnecessary but just in case here’s a /s for y’all.

    • KayLeadfoot@fedia.io (OP) · +3/-1 · 15 hours ago

      GPS data predicted the road would go straight as far as the horizon. The camera said the tree or shadow was an unexpected 90-degree bend in the road. So the only rational move was to turn 90 degrees, obviously! No notes, no whammies, flawless.

    • Buffalox@lemmy.world · +1 · 18 hours ago

      Took me a second to get it, but that’s brilliant.
      I wonder if there might even be some truth to it?

      • GSV_Sleeper_Service@lemmy.world · +2 · edited · 4 hours ago

        Wonder no more. Someone did this on YouTube using cardboard boxes, and the Tesla drove straight through them. Skip to around the 15-minute mark to watch it drive through the “wall” without even touching the brakes.

        Edit: I thought the person you were replying to said it thought a wall was a tunnel, not the other way round. Still funny to watch it breeze through a wall with a tunnel painted on it, though.

        • Buffalox@lemmy.world · +1 · 2 hours ago

          Yes, I know the video. What I was wondering is whether it could be true that they tried to make the AI detect a wall with a road painted on it, and it falsely believed there was a wall and made an evasive maneuver to avoid it.

  • melsaskca@lemmy.ca · +14 · 1 day ago

    I have visions of Elon sitting in his lair, stroking his cat, and using his laptop to cause this crash. /s

  • orca@orcas.enjoying.yachts · +92/-2 · 1 day ago

    The worst part is that this problem has already been solved by using LIDAR. Vegas had fully self-driving cars that I saw perform flawlessly, because they were manufactured by a company that doesn’t skimp on tech or rip people off.

    • FreedomAdvocate@lemmy.net.au · +2/-2 · 4 hours ago

      Lidar doesn’t completely solve the issue lol. Lidar can’t see lane markings, speed signs, pedestrian crossings, etc. Cars equipped with lidar crash into things too.

      • orca@orcas.enjoying.yachts · +1 · 51 minutes ago

        I oversold it in my original comment, but it still performs better than the regular cameras Tesla uses, especially in bad weather and other difficult scenarios. Elon is dumb though and doesn’t think LiDAR is needed for self-driving.

    • NotMyOldRedditName@lemmy.world · +6 · edited · 17 hours ago

      I wouldn’t really call it a solved problem when Waymo, with lidar, is crashing into physical objects.

      https://www.msn.com/en-us/autos/news/waymo-recalls-1200-robotaxis-after-cars-crash-into-chains-gates-and-utility-poles/ar-AA1EMVTF

      NHTSA stated that the crashes “involved collisions with clearly visible objects that a competent driver would be expected to avoid.” The agency is continuing its investigation.

      It’d probably be better to say that Lidar is the path to solving these problems, or a tool that can help solve them. But not solved.

      Just because you see a car working perfectly, doesn’t mean it always is working perfectly.

      • TheGrandNagus@lemmy.world · +2 · 5 hours ago

        A human also (hopefully anyway) wouldn’t drive if you put a cone over their head.

        Like yeah, if you purposely block the car’s vision, it should refuse to drive.

      • ayyy@sh.itjust.works · +3 · 16 hours ago

        The same is true when you put a cone in front of a human driver’s vision. I don’t understand why “haha I blocked the vision of a driver and they stopped driving” is a gotcha.

      • KayLeadfoot@fedia.io (OP) · +23 · 1 day ago

        Probably Zoox, but conceptually similar: LiDAR-backed.

        You can immobilize them by setting anything large on them. Your purse, a traffic cone, a person :)

        Probably makes sense to be a little cautious with the gas pedal when there is anything on top of the vehicle.

        • SynopsisTantilize@lemm.ee · +6 · 1 day ago

          That, and if you just put your toddler on the roof or trunk of the car for a quick second to grab something from your pocket… VROooOMMM, baby gone.

  • itisileclerk@lemmy.world · +11 · 1 day ago

    Why would someone be a passenger in a self-driving vehicle? Do they know they are test subjects, part of a “car trial” (or whatever it should be called)? Self-driving is not reliable and not necessary. Too much money is invested in something that is low priority to have. There are perfectly fast and safe self-driving solutions already, like high-speed trains.

    • dan1101@lemm.ee · +2 · 18 hours ago

      I have no idea, I guess they have a lot more confidence in self driving (ESPECIALLY Tesla) than I do.

  • Skyrmir@lemmy.world · +26 · 1 day ago

    I use autopilot all the time on my boat. No way in hell I’d trust it in a car. They all occasionally get suicidal. Mine likes to lull you into a false sense of security, then take a sharp turn into a channel marker or cargo ship at the last second.

    • SynopsisTantilize@lemm.ee · +8 · 1 day ago

      They have autopilot on boats? I never even thought about that existing. Makes sense, I’d just never heard of it until now!

      • JohnEdwa@sopuli.xyz · +5 · 20 hours ago

        They’ve technically had autopilots for over a century; the first one was on the oil tanker J.A. Moffett in 1920. Though the main purpose is to keep the vessel going dead straight, as otherwise wind and currents turn it, so using modern car terms I think it would be more accurate to say they have lane assist? Commercial ones can often do waypoint navigation, following a set route on a map, but I don’t think that’s very common on personal vessels.

    • dependencyinjection@discuss.tchncs.de · +13 · 1 day ago

      Exactly. My car doesn’t have AP, but it does have a shedload of sensors, and sometimes it just freaks out about stuff being too close to the car for no discernible reason. Really freaks me out, as I’m like, what do you see bro, we’re just driving down the motorway.

      • ayyy@sh.itjust.works · +3 · 16 hours ago

        For mine, it’s the radar seeing the retro-reflective stripes on utility poles being brighter than it expects.

    • Echo Dot@feddit.uk · +14/-1 · edited · 1 day ago

      Isn’t there a plane whose autopilot famously keeps trying to crash into the ground? The general advice is to just not let it do that: whenever it looks like it’s about to crash into the ground, pull up instead.

      • GamingChairModel@lemmy.world · +14 · 23 hours ago

        All the other answers here are wrong. It was the Boeing 737 MAX.

        They fit bigger, more fuel efficient engines on it that changed the flight characteristics, compared to previous 737s. And so rather than have pilots recertify on this as a new model (lots of flight hours, can’t switch back), they designed software to basically make the aircraft seem to behave like the old model.

        And so a bug in the cheaper version of the software, combined with a faulty sensor, would cause the software to take over and try to override the pilots and dive downward instead of pulling up. Two crashes happened within 5 months, to aircraft that were pretty much brand new.

        It was grounded for a while as Boeing fixed the software and hardware issues, and, more importantly, updated all the training and reference materials for pilots so that they were aware of this basically secret setting that could kill everyone.

      • kameecoding@lemmy.world · +17 · 1 day ago

        The Boeing 787 Max did that when a sensor became faulty and there was no redundancy for the sensors, because that was in an optional add-on package.

        • mbtrhcs@feddit.org · +5 · edited · 23 hours ago

          Even worse, the pilots and the airlines didn’t even know the sensor or associated software control existed and could do that.

      • Skyrmir@lemmy.world · +2/-2 · 1 day ago

        Pretty sure that’s the Boeing 777 and they discovered that after a crash off Brazil.