I came across this thread, which lays out pretty clearly (with .gifs!) an argument against self-driving cars. Put simply:
will that car pull out? does the driver see me? is that pedestrian going to cross? is it a child or dog, prone to sudden, poorly planned moves? are they waiting for that spot? did they wave me through? are they on the phone? the ballet of shared intention is VAST…
and it’s unteachable to a machine. we have NO IDEA how we do it. you know how to drive and where to be because you can model what other people are going to do. if each were just a cube in a game, you’d have no idea. all that info would be gone. that’s FSD.
But let’s say that post is incorrect, and we’re able to get self-driving cars on par with human-driven cars. There’s still another problem: we have no idea why the car makes the decisions it does. When people screw up, we can usually understand how they made the mistake. But a machine-learning system? Consider what happened when a computer took on the best Go player in the world:
Lee Sedol had seen all the tricks. He knew all the moves. As one of the world’s best and most experienced players of the complex board game Go, it was difficult to surprise him. But halfway through his first match against AlphaGo, the artificially intelligent player developed by Google DeepMind, Lee was already flabbergasted.
AlphaGo’s moves throughout the competition, which it won earlier this month, four games to one, weren’t just notable for their effectiveness. The AI also came up with entirely new ways of approaching a game that originated in China two or three millennia ago and has been played obsessively since then. By their fourth game, even Lee was thinking differently about Go and its deceptively simple grid.
The AlphaGo-Lee Sedol matchup was an intense contest between human and artificial intelligence. But it also contained several moves made by both man and machine that were outlandish, brilliant, creative, foolish, and even beautiful.
(there’s a somewhat technical explainer here).
A real problem will arise when (or if) self-driving cars get good enough to drive in complex situations but still need improvement: understanding why they make the mistakes they do will be really hard. Obviously, we can determine when a mistake happens (a collision). But the ‘thought process’ might be utterly alien and incomprehensible to us, and even then we will likely interpret it through a human prism. That makes designing safer streets much harder.
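To make the opacity concrete, here’s a minimal, entirely hypothetical sketch: a tiny neural network trained to decide whether to brake from a handful of made-up pedestrian features. The feature names, the data, and the network are all inventions for illustration (real driving systems are vastly larger, which only makes this worse). The point is what the trained system looks like from the inside: a pile of weights with no human-readable rationale attached.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensor features, invented for this sketch:
# [pedestrian distance (m), closing speed (m/s),
#  attention score (0-1), phone-use score (0-1)]
X = rng.uniform([0, 0, 0, 0], [50, 15, 1, 1], size=(500, 4))
# Invented ground truth the network must learn: brake for a
# close, fast-closing pedestrian who is using a phone.
y = ((X[:, 0] < 15) & (X[:, 1] > 5) & (X[:, 3] > 0.5)).astype(float)

mu, sd = X.mean(0), X.std(0)
Xn = (X - mu) / sd  # normalize features so training behaves

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer, trained with plain gradient descent on
# binary cross-entropy loss.
W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
for _ in range(5000):
    h = np.tanh(Xn @ W1 + b1)         # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()  # P(brake)
    dz2 = (p - y)[:, None] / len(Xn)  # gradient of BCE wrt logits
    dW2, db2 = h.T @ dz2, dz2.sum(0)
    dh = (dz2 @ W2.T) * (1 - h ** 2)  # backprop through tanh
    dW1, db1 = Xn.T @ dh, dh.sum(0)
    W1 -= 0.5 * dW1; b1 -= 0.5 * db1
    W2 -= 0.5 * dW2; b2 -= 0.5 * db2

# The trained network makes a sensible-looking call on a new
# scene: close, fast-closing, phone in hand...
scene = (np.array([[10.0, 8.0, 0.2, 0.9]]) - mu) / sd
p = sigmoid(np.tanh(scene @ W1 + b1) @ W2 + b2)
print(f"P(brake) = {p[0, 0]:.2f}")

# ...but its entire "reasoning" is this matrix of numbers. Nothing
# in here says "the pedestrian is on the phone"; post-hoc tools can
# poke at gradients and activations, but there is no rationale to
# read off.
print(W1.round(2))
```

Scale that up to millions of weights and dozens of sensor streams, and you get the debugging problem described above: we can see *that* the system chose to brake (or didn’t), but not *why* in any terms a traffic engineer could act on.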
Aside: It goes without saying that fewer and slower cars are safer. But to the extent we can prevent collisions by altering either road design or human/machine behavior, we should do that too.