AI, Go, and Self-Driving Cars

I came across this thread which lays out pretty clearly (with .gifs!) an argument against self-driving cars. Put simply:

will that car pull out? does the driver see me? is that pedestrian going to cross? is it a child or dog, prone to sudden, poorly planned moves? are they waiting for that spot? did they wave me through? are they on the phone? the ballet of shared intention is VAST…

and it’s unteachable to a machine. we have NO IDEA how we do it. you know how to drive and where to be because you can model what other people are going to do. if each were just a cube in a game, you’d have no idea. all that info would be gone. that’s FSD.

But let’s say that post is incorrect, and we’re able to get self-driving cars on par with human-driven cars. There’s still another problem: we have no idea why the car makes the decisions it does. When people screw up, we can usually understand how they made the mistake. But a machine-learning system? Consider what happened when a computer took on the best Go player in the world:

Lee Sedol had seen all the tricks. He knew all the moves. As one of the world’s best and most experienced players of the complex board game Go, it was difficult to surprise him. But halfway through his first match against AlphaGo, the artificially intelligent player developed by Google DeepMind, Lee was already flabbergasted.

AlphaGo’s moves throughout the competition, which it won earlier this month, four games to one, weren’t just notable for their effectiveness. The AI also came up with entirely new ways of approaching a game that originated in China two or three millennia ago and has been played obsessively since then. By their fourth game, even Lee was thinking differently about Go and its deceptively simple grid.

The AlphaGo-Lee Sedol matchup was an intense contest between human and artificial intelligence. But it also contained several moves made by both man and machine that were outlandish, brilliant, creative, foolish, and even beautiful.

(there’s a somewhat technical explainer here).

A real problem is that if and when self-driving cars get good enough to drive in complex situations but still need improvement, understanding why they make the mistakes they do will be very hard. Obviously, we can determine when a mistake happens (a collision). But the ‘thought process’ might be utterly alien and incomprehensible to us, and even then we will likely interpret it through a human prism. That makes designing safer streets much harder.

Aside: It goes without saying that fewer and slower cars are safer. But to the extent we can prevent collisions by altering either road design or human/machine behavior, that’s something we should do too.


7 Responses to AI, Go, and Self-Driving Cars

  1. Mark K says:

    I think this is a bad argument on both sides.

    First, how accurate will humans be in the first situation? Not very. The computer doesn’t have to be perfect, just as good as a human or a little better, averaged over all the situations like this and weighted by how often they happen (a rough worked sketch is at the end of this comment).

    Second, that AlphaGo program was the first AI program that ever won a game against a pro. It was a cobbled-together first try. A year later it beat 50 top pros convincingly, never losing. That is, it learned to be better, and it was even trained with adversarial learning to find its weak points.
    So this may be a point for how AIs can get better.

    People are so bad at controlling a car that tens of thousands die every year from it… How do we know why they did what they did? We don’t know the human algorithm, so how do we do that?
    We investigate the circumstances. We would use the same technique with modern AIs, except that once we have an idea we can zero in using virtual tools, so nobody gets hurt in the checking.

    I am not saying AIs would be perfect, or close to perfect, or halfway to perfect. Again, they only have to be a little bit better than the crappy set of human drivers to save a bunch of lives.
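
    A back-of-the-envelope sketch of that weighting idea, with entirely hypothetical crash rates and scenario shares, just to show how rare hard cases trade off against common easy ones:

        # Hypothetical numbers, purely to illustrate the weighting idea.
        # Each scenario maps to (share of driving, human crash rate, AV crash rate).
        scenarios = {
            "routine highway cruising":      (0.60, 0.00002, 0.00001),
            "busy urban intersections":      (0.35, 0.00010, 0.00008),
            "ambiguous 'ballet' situations": (0.05, 0.00005, 0.00020),
        }

        human = sum(share * h for share, h, _ in scenarios.values())
        av = sum(share * a for share, _, a in scenarios.values())

        # With these made-up numbers the AV is four times worse in the rare,
        # ambiguous cases yet still comes out slightly better overall
        # (~4.4e-05 vs ~5.0e-05), because it is better in the common ones.
        print(f"weighted human crash rate: {human:.7f}")
        print(f"weighted AV crash rate:    {av:.7f}")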

  2. Scottie says:

    Hello Mike. I am a believer in self-driving autos. As a disabled person who couldn’t drive for a few years, I would have loved to have had one. I live in an area with a lot of seniors who shouldn’t be driving but have to. Also, think of ending driving under the influence of drugs or alcohol. Last year we bought a new car with all the electronic safety features. They are great; the car does so many things, everything from lane detection and drift warnings to alert sounds and automatic braking when needed. Shortly after we got the car I was driving in the left lane of a double-lane road. A car to my right suddenly came across the road, cutting in front of me to reach a left-hand turn lane. Before I could react, the car did: warning lights and sounds, with the car auto-braking to avoid a collision. It took action faster than I could. It also warns if someone is coming up on you too fast. One last thing I think every auto should have is the car/phone connection that lets you use all the functions of your phone without ever touching the phone. No more people weaving all over the road because they are trying to text while driving. The car reads the texts and does the replies. Want to hear your podcasts? The car finds them and plays them for you. So I think we are very close to automatic autos. Hugs

    • Bayesian Bouffant, FCD says:

      One last thing I think every auto should have is the car/phone connection that lets you use all the functions of your phone without ever touching the phone. No more people weaving all over the road because they are trying to text while driving.

      The studies on distracted driving have been clear from the very start: it is not taking the hands and eyes off the road that is the issue, it is the mental distraction. This was clear even before various states and cities started passing ‘hands-free’ phone laws.

      • Scottie says:

        Hello Bayesian Bouffant. Regardless, it is hard to stay in your lane and react to lights and other drivers when you are not looking at the road. I can tell you I have seen people hit curbs, drift between lanes, and drop or gain speed while they have their phones in their hands, texting. I am not saying that other things cannot distract a driver, such as a child or fiddling with the radio, but for extended stretches of erratic driving it is almost always texting. Or being under the influence. Hugs

  3. Bayesian Bouffant, FCD says:

    The counter-argument: St. Louis.
    If you drive in big cities like New York or Los Angeles, you can usually do a pretty good job of predicting the bad driving you will encounter: That car will cut me off. That car will turn right-on-red without making a complete stop. The drivers will be aggressive and selfish. And therefore predictable.
    My experience with St. Louis is different. Frequently, when encountering a piece of bad driving, I would find myself wondering, why on earth would anyone do that? For example: I was driving in the left-hand lane of a multi-lane street, and as a car came up behind me I signaled for a lane change before shifting to the right. So what happened? The driver coming up behind me changed lanes first and passed me on the right. Why? It just doesn’t make any sense.

  4. jmagoun says:

    With every article I read, I wonder more and more why anyone thinks self-driving cars are a good idea, worth spending billions on, both for the cars themselves and for the street infrastructure and insurance schemes that will arise in response to them as their flaws become more apparent and less inherently fixable. Scottie’s notes about safety features that help a driver who is actively driving seem more like the way to go.

  5. dr2chase says:

    I was going to rant differently, but it's interesting to note that when self-driving cars can't deal with drivers' illegal box-blocking, the response is "welp, they just won't work," whereas when the problem is pedestrians, it's "fences and pens for the rabble!"
