I think the reason non-tech people find this so difficult to comprehend is a poor understanding of which problems are easy for (classically programmed) computers to solve and which are hard.
To the layperson it makes sense that self-driving cars should be programmed this way. After all, this is a trivial problem for a human to solve: just look, and if there is a person, you stop. Easy peasy.
But for a computer, how do you know? What is a 'person'? What is a 'crossing'? How do we know if the person is 'at/on' the crossing as opposed to simply near it or passing by?
To me it's this disconnect between the common understanding of computer capability and the reality that causes the misconception.
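To make that disconnect concrete, here's a minimal sketch: the rule a layperson imagines is one line, and all the difficulty hides inside predicates like `is_person` and `is_at_crossing`, which are placeholders here, not any real API.

```python
# Hypothetical sketch: the "easy" rule people imagine a self-driving
# car should follow. The hard part is not this logic; it's the two
# predicates below, which have no simple classical implementation.

def is_person(detection) -> bool:
    # What counts as a "person"? Someone pushing a bicycle? A child
    # half-hidden behind a parked car? This is the learned-perception
    # problem, not a lookup table.
    raise NotImplementedError

def is_at_crossing(detection, map_data) -> bool:
    # "At" the crossing, or just standing near it, or walking past?
    # Position and intent are both ambiguous.
    raise NotImplementedError

def should_stop(detections, map_data) -> bool:
    # The trivial-looking rule: stop if a person is at a crossing.
    return any(is_person(d) and is_at_crossing(d, map_data) for d in detections)
```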
I think you could liken it to training a young driver who doesn’t share a language with you. You can demonstrate the behavior you want once or twice, but unless every example it learns from demonstrates that behavior, you can’t say “yes, we specifically told it to do that.”
Most crosswalks are marked. The vehicle is able to identify obstructions in the road, as well as things on the side of the road that are moving toward the road, just like cross-street traffic.
If (thing) is crossing the street, then stop. If (thing) is stationary near a marked crosswalk, stop; if it doesn't move within a reasonable amount of time, say (x) seconds, then go.
You know, the same way people are supposed to handle the same situation.
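As a rough sketch of that rule (the `TrackedObject` fields and the 3-second threshold are made up for illustration, not anything a real vehicle uses):

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    is_crossing: bool            # currently moving across the roadway
    near_marked_crosswalk: bool  # within a short distance of a marked crosswalk
    seconds_stationary: float    # how long the object has been still

WAIT_SECONDS = 3.0  # an arbitrary "reasonable amount of time"

def decide(objects: list[TrackedObject]) -> str:
    # Anything actively crossing: stop, no question.
    if any(o.is_crossing for o in objects):
        return "stop"
    # Someone waiting at a marked crosswalk: stop and give them a few seconds.
    waiting = [o for o in objects if o.near_marked_crosswalk]
    if waiting and min(o.seconds_stationary for o in waiting) < WAIT_SECONDS:
        return "stop"
    # Nobody crossing, and anyone nearby has stayed put long enough: go.
    return "go"
```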
Most crosswalks in the US are not marked, and in all places I'm familiar with, vehicles must stop or yield to pedestrians at unmarked crosswalks.
At unmarked crosswalks, and at marked but uncontrolled crosswalks, we have to handle the situation with social cues: which direction the pedestrian wants to cross the street/road/highway, and whether they will feel safer crossing after a vehicle has passed than before (almost always the case for homeless pedestrians, and frequently for pedestrians in moderate traffic).
If Waymo can't figure out whether something intends to, or is likely to, enter the highway, it can't drive a car. That could be people at crosswalks, people crossing at places other than crosswalks, blind pedestrians crossing anywhere, deaf-blind pedestrians crossing even at controlled intersections, kids or wildlife or livestock running toward the road, etc.
Thing? Like a garbage bin? Or a sign?
Person, dog, cat, rolling cart, bicycle, etc.
If the car is smart enough to recognize a stationary stop sign, then it should be able to ignore a permanently mounted crosswalk sign or indicator light at a crosswalk and exclude those from the set of things that might move into the street. Or it could just stop and wait a couple of seconds if it isn't sure.
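Something like this, as a hedged sketch (the `static_map` lookup and the 0.8 confidence cutoff are assumptions for illustration):

```python
# Sketch: exclude permanently mounted infrastructure (stop signs,
# crosswalk signs, signal poles) from the set of things that might
# move into the street; if unsure, just stop and wait briefly.

def caution_action(object_id: str, confidence: float,
                   static_map: set[str]) -> str:
    if object_id in static_map:
        return "ignore"          # known fixed sign/pole from the map
    if confidence >= 0.8:
        return "watch"           # plausibly a person/animal/cart, track it
    return "stop_and_wait"       # not sure what it is: pause a couple seconds
```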
A woman was killed by a self-driving car because she was walking her bicycle across the road. The car hadn't been programmed to understand what a person walking a bicycle is. Its AI switched between classifying her as a pedestrian, a cyclist, and "unknown". It couldn't tell whether to slow down, and then it hit her. The engineers forgot to add a category, and someone died.
It shouldn't even matter what category things are when they are on the road. If anything larger than gravel is in the road, the car should stop.
You can use that logic to say it would be difficult to do the right thing for all cases, but we can start with the ideal case.
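A sketch of that class-agnostic rule (the size threshold and `Detection` fields are placeholders): brake for anything big enough that overlaps the planned path, no matter what the classifier calls it.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    size_m: float   # largest measured dimension, in metres
    in_path: bool   # overlaps the planned driving corridor
    label: str      # "pedestrian", "cyclist", "unknown", ...

MIN_SIZE_M = 0.15   # roughly "larger than gravel" (assumed threshold)

def should_brake(detections: list[Detection]) -> bool:
    # The label is deliberately ignored: flip-flopping between
    # "pedestrian", "cyclist", and "unknown" must not delay braking.
    return any(d.in_path and d.size_m >= MIN_SIZE_M for d in detections)
```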
The difference is that humans (usually) come with empathy (or at least self-preservation) built in. With self-driving cars we aren't building in empathy and self-preservation (or at least passenger preservation); we're hard-coding the scenarios where the law says they have to do X or Y.