this post was submitted on 18 Oct 2024
783 points (98.4% liked)


The U.S. government’s road safety agency is again investigating Tesla’s “Full Self-Driving” system, this time after getting reports of crashes in low-visibility conditions, including one that killed a pedestrian.

The National Highway Traffic Safety Administration says in documents that it opened the probe on Thursday with the company reporting four crashes after Teslas entered areas of low visibility, including sun glare, fog and airborne dust.

In addition to the pedestrian’s death, another crash involved an injury, the agency said.

Investigators will look into the ability of “Full Self-Driving” to “detect and respond appropriately to reduced roadway visibility conditions, and if so, the contributing circumstances for these crashes.”

[–] elgordino@fedia.io 24 points 2 weeks ago (1 children)

If anyone was somehow still thinking RoboTaxi is ever going to be a thing: no, it's not, because of reasons like this.

[–] testfactor@lemmy.world 26 points 2 weeks ago (6 children)

It doesn't have to not hit pedestrians. It just has to hit fewer pedestrians than the average human driver.

[–] ContrarianTrail@lemm.ee 21 points 2 weeks ago (4 children)

Exactly. The current rate is 80 deaths per day in the US alone. Even if we had self-driving cars proven to be 10 times safer than human drivers, we’d still see 8 news articles a day about people dying because of them. Taking this as 'proof' that they’re not safe is setting an impossible standard and effectively advocating for 30,000 yearly deaths, as if it’s somehow better to be killed by a human than by a robot.
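
A quick back-of-the-envelope check on those numbers, as a Python sketch (the 80-a-day figure is the one cited above):

```python
# Rough check of the figures above (illustrative only).
deaths_per_day = 80              # cited US road deaths per day
print(deaths_per_day * 365)      # 29200 -- roughly the "30,000 yearly deaths"
print(deaths_per_day / 10)       # 8.0 -- daily deaths if cars were 10x safer
```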

[–] pennomi@lemmy.world 9 points 2 weeks ago (1 children)

If you get killed by a robot, it simply lacks the human touch.

[–] billiam0202@lemmy.world 7 points 2 weeks ago (3 children)

If you get killed by a robot, you can at least die knowing your death was the logical option and not a result of drunk driving, road rage, poor vehicle maintenance, panic, or any other of the dozens of ways humans are bad at decision-making.

[–] sugar_in_your_tea@sh.itjust.works 7 points 2 weeks ago* (last edited 2 weeks ago)

It doesn't even need to be logical, just statistically reasonable. You're literally a statistic anytime you interact w/ any form of AI.

[–] candybrie@lemmy.world 4 points 2 weeks ago

Or the result of cost cutting...

[–] FiskFisk33@startrek.website 2 points 2 weeks ago

or a flipped comparison operator, or a "//TODO test code please remove"
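
To make the joke concrete, here's a contrived sketch of that kind of one-character bug (everything hypothetical, not real autopilot code):

```python
# Hypothetical sketch of a flipped comparison operator in a safety check.
# Intended logic: brake when the obstacle is within the stopping distance.

def should_brake(obstacle_distance_m: float, stopping_distance_m: float) -> bool:
    return obstacle_distance_m <= stopping_distance_m  # correct

def should_brake_buggy(obstacle_distance_m: float, stopping_distance_m: float) -> bool:
    return obstacle_distance_m >= stopping_distance_m  # flipped: brakes only when safely far away

assert should_brake(5.0, 20.0)            # obstacle at 5 m, 20 m needed to stop: brake
assert not should_brake_buggy(5.0, 20.0)  # same scenario, the buggy check never brakes
```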

[–] ano_ba_to@sopuli.xyz 1 points 2 weeks ago* (last edited 2 weeks ago)

"10 times safer than human drivers", (except during specific visually difficult conditions which we knowingly can prevent but won't because it's 10 times safer than human drivers). In software, if we have replicable conditions that cause the program to fail, we fix those, even though the bug probably won't kill anyone.

[–] III@lemmy.world 1 points 2 weeks ago

The problem with this way of thinking is that there are solutions to eliminate accidents even without eliminating self-driving cars. By dismissing the concern you are saying nothing more than it isn't worth exploring the kinds of improvements that will save lives.

[–] drmoose@lemmy.world 0 points 2 weeks ago

But they aren't and likely never will be.

And how are we to correct for a lack of safety then? With human drivers, you obviously discourage dangerous driving through punishment. Who do you punish in a self-driving car?

[–] elgordino@fedia.io 13 points 2 weeks ago (1 children)

It needs to be way, way better than ‘better than average’ if it’s ever going to be accepted by regulators and the public. Without better sensors I don’t believe it will ever make it. Waymo had the right idea here, if you ask me.

[–] sugar_in_your_tea@sh.itjust.works 1 points 2 weeks ago (2 children)

But why is that the standard? Shouldn't "equivalent to average" be the standard? Because if self-driving cars can be at least as safe as a human, they can be improved to be much safer, whereas humans won't improve.

[–] medgremlin@midwest.social 3 points 2 weeks ago (1 children)

I'd accept that if the makers of the self-driving cars could be tried for vehicular manslaughter the same way a human would be. Humans carry civil and criminal liability, but at the moment, the companies that produce these things carry only nominal civil liability. If Musk could go to prison for his self-driving cars killing people the same way a regular driver would, I'd be willing to lower the standard.

[–] sugar_in_your_tea@sh.itjust.works 6 points 2 weeks ago (1 children)

Sure, but humans are only criminally liable if they fail the "reasonable person" standard (i.e. a "reasonable person" would have swerved out of the way, but you were distracted, therefore criminal negligence). So the court would need to prove that the makers of the self-driving system failed the "reasonable person" standard (i.e. a "reasonable person" would have done more testing in more scenarios before selling this product).

So yeah, I agree that we should make certain positions within companies criminally liable for criminal actions, including negligence.

[–] medgremlin@midwest.social 4 points 2 weeks ago (1 children)

I think the threshold for proving the "reasonable person" standard for companies should be extremely low. They are a complex organization that is supposed to have internal checks and reviews, so it should be very difficult for them to squirm out of liability. The C-suite should be first on the list for criminal liability so that they have a vested interest in ensuring that their products are actually safe.

[–] sugar_in_your_tea@sh.itjust.works 3 points 2 weeks ago (1 children)

Sure, the "reasonable person" would be a competitor who generally follows standard operating procedures. If they're lagging behind the industry in safety or something, that's evidence of criminal negligence.

And yes, the C-suite should absolutely be the first to look at, but the problem could very well come from someone in the middle trying to make their department look better than it is and lying to the C-suites. C-suites have a fiduciary responsibility to the shareholders, whereas their reports don't, so they can have very different motivations.

[–] medgremlin@midwest.social 1 points 2 weeks ago (1 children)

The c-suites have the ultimate power and therefore ultimate responsibility for whatever happens in their organization. Similar to how parents can be held criminally liable for their children's actions. It's just that much more incentive for them to make sure things are in order in their organization.

Also, Citizens United ruled that corporations are people, so they can be held to the same standards of responsibility as other people.

[–] sugar_in_your_tea@sh.itjust.works 1 points 2 weeks ago (1 children)

parents can be held criminally liable for their children’s actions

It's a pretty limited liability, as can be seen in a lot of incidents (e.g. mass shootings). You have to prove a pretty high standard of negligence for a parent to be responsible for their kids' actions.

The same should be true for anyone else as well, if the C-suite is unaware that something negligent or illegal was going on two levels below, they shouldn't be held liable for that. You should only be liable for a crime that you are aware of, or should have been aware of if you were doing your due diligence. But yes, in many cases, the C-suites should be held to task here.

[–] medgremlin@midwest.social 1 points 2 weeks ago (1 children)

That's the thing though...I think it is part of their due diligence to know what's going on in their own business. If they can't guarantee that it's safe, they shouldn't release it.

If their reports are lying to them, and not in a way that they should have detected, then I don't think it falls on them if things go sideways. And that happens somewhat regularly, when you have a VP or something somewhere painting a much rosier picture that what is actually happening on the ground.

At that point, it comes down to a call on whether they were negligent.

[–] drmoose@lemmy.world -1 points 2 weeks ago (1 children)

Not on average as drivers; safety protections do improve, though.

[–] leftytighty@slrpnk.net 7 points 2 weeks ago

The average human driver is tried and held accountable.

[–] spankmonkey@lemmy.world 5 points 2 weeks ago* (last edited 2 weeks ago)

That is the minimum outcome for an automated safety feature to count as an improvement over human drivers.

But if everyone else is using something you refused to adopt that would likely have avoided someone's death, while you misname your feature to mislead customers, then you are in legal trouble.

When it comes to automation you need to be far better than humans because there will be a higher level of scrutiny. Kind of like how planes are massively safer than driving on average, but any incident where someone could have died gets a massive amount of attention.

[–] dmention7@lemm.ee 2 points 2 weeks ago* (last edited 2 weeks ago)

It's a bit reductive to put it in terms of a binary choice between an average human driver and a full AI driver. I'd argue it has to hit fewer pedestrians than a human driver with the full suite of driver assists currently available in order to be viable.

Self-driving is purely a convenience factor for personal vehicles and purely an economic factor for taxis and other commercial use. If a human driver assisted by all of the sensing and AI tools available is the safest option, that should be the de facto standard.

[–] helenslunch@feddit.nl 2 points 2 weeks ago

It does, actually. That's why robotaxis and self-driving cars in general will never be a thing.

Society accepts that humans make mistakes, regardless of how careless they're being at the time. Autonomous vehicles are not allowed the same latitude. A single pedestrian gets killed and we have to get them all off the road.