Can (And Should) Self-Driving Cars Learn and Adapt?

Guest post by Charles Bell (@csbell88)

Slowly but surely, smart devices are becoming a part of our everyday lives, and there’s a great deal of debate about it. People wonder how far it will go, whether or not it’s truly good for us, and what risks or benefits may not have become fully apparent yet. Some of these topics were covered in a recent article about the IoT and machine learning as they relate to home functions and everyday applications. But another area where machine learning is becoming a fascinating topic is autonomous cars.

With news constantly coming out about Google, Tesla, and other high-end tech companies testing self-driving technology, it’s begun to seem as if we’re very close to these types of vehicles populating our roads. More likely, though, we’re still five or ten years away. This is partly because official regulations will need to be in place before these cars are allowed to drive, and developing those regulations will likely be a drawn-out process. More significantly, it’s because a number of tricky final touches still have to be worked out before self-driving cars can be considered safe, and a lot of them have to do with machine learning, or the intuition of autonomous systems.

These are some of the questions that remain regarding how a smart driving system will ultimately be able to learn and perceive new situations.

Can They Do It With No Human Input?

Using the example of self-driving systems that use sensors to draw “bounding boxes” around objects (like other cars), one article made an interesting point about “human-in-the-loop” deep learning. The idea is that sensors detect surrounding objects and estimate their sizes, and the system then draws bounding boxes around them that the vehicle treats as regions it must not enter. Simply put, it’s a way of avoiding collisions. Deep learning comes into play when these boxes are initially drawn incorrectly and later corrected.
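
To make the idea concrete, here’s a minimal sketch of what a bounding box and a collision check might look like, assuming a simplified 2D top-down view. The class and field names are illustrative, not taken from any real self-driving stack.

```python
# Minimal sketch of the bounding-box idea, assuming a simplified 2D top-down view.
# Class and field names are illustrative, not from any real self-driving system.

from dataclasses import dataclass

@dataclass
class BoundingBox:
    x_min: float
    y_min: float
    x_max: float
    y_max: float
    confidence: float  # how sure the detector is about this box

def boxes_overlap(a: BoundingBox, b: BoundingBox) -> bool:
    """Return True if two axis-aligned boxes intersect (a potential collision)."""
    return not (a.x_max < b.x_min or b.x_max < a.x_min or
                a.y_max < b.y_min or b.y_max < a.y_min)

# Example: a planner would treat any path that overlaps a detected box as blocked.
ego = BoundingBox(0.0, 0.0, 2.0, 4.5, confidence=1.0)      # our own car
other = BoundingBox(1.5, 4.0, 3.5, 8.5, confidence=0.82)   # a detected vehicle
print(boxes_overlap(ego, other))  # True -> keep clear of this region
```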

The question is, who does the correcting? Right now the idea is that humans review the bounding boxes and send feedback, with the deep learning model updated with corrections each week. Checking each and every box is impractical, so to narrow things down, a setup could be arranged in which humans check only a certain percentage of boxes, the ones drawn with the lowest levels of confidence. In theory, this should work rather well: with human input, self-driving machinery can gradually learn to perfect its recognition of surrounding obstacles. In the sense that we imagine perfected autonomous vehicles, though, it still feels like a clunky way to do things. The hope is that systems can eventually learn to recognize their own mistakes and adjust on the go.
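
Here’s a rough sketch of that “check only the least-confident boxes” idea, reusing the illustrative BoundingBox class from the sketch above. The review fraction and the weekly retraining step are assumptions about how such a pipeline might be arranged, not a description of any particular company’s process.

```python
# Rough sketch of confidence-based human review (an assumption about how such a
# pipeline might work, reusing the illustrative BoundingBox class from above).

def select_for_human_review(boxes, fraction=0.05):
    """Return the lowest-confidence fraction of detections for a human to check."""
    ranked = sorted(boxes, key=lambda b: b.confidence)
    cutoff = max(1, int(len(ranked) * fraction))
    return ranked[:cutoff]

detections = [
    BoundingBox(0, 0, 1, 1, confidence=0.31),
    BoundingBox(2, 2, 3, 3, confidence=0.97),
    BoundingBox(5, 1, 6, 2, confidence=0.55),
]
to_review = select_for_human_review(detections, fraction=0.34)
print([round(b.confidence, 2) for b in to_review])  # [0.31] -> sent to a human labeler

# Each week, the human-corrected boxes would be folded back into the training set,
# so the model gradually improves on exactly the cases it was least sure about.
```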

Can Systems Recognize Human Gestures?

Aside from recognizing surrounding objects and forming boundaries around them, self-driving cars will also need to recognize the actions—and perhaps even the intended actions—of other drivers. In large part, this is done simply by sensing vehicle movement, and in some more advanced cases a self-driving car may even be able to pick up on driving tendencies to predict the behavior of surrounding vehicles on the road.
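
To make “predicting behavior from sensed movement” a bit more concrete, here’s a deliberately simple sketch that just extrapolates a neighboring car’s recent velocity forward in time. Real prediction systems are far more sophisticated; the function and numbers below are purely illustrative.

```python
# Deliberately simple illustration of predicting another vehicle's behavior from
# its sensed movement alone: constant-velocity extrapolation. Purely illustrative.

def predict_position(position, velocity, seconds_ahead):
    """Extrapolate a neighboring car's (x, y) position assuming constant velocity."""
    x, y = position
    vx, vy = velocity
    return (x + vx * seconds_ahead, y + vy * seconds_ahead)

# A car 20 m ahead, drifting left at 0.5 m/s while doing 15 m/s forward:
print(predict_position((0.0, 20.0), (-0.5, 15.0), seconds_ahead=2.0))  # (-1.0, 50.0)
```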

As of now, however, automated systems cannot account for the complexity of human behavior. Think about the non-verbal communication that happens so frequently on the road: we wave to signal that other drivers can go ahead, we nod at each other before changing lanes, and so on. You may even drive more cautiously if you notice another driver having an argument or looking at his or her phone. These are signals and gestures that self-driving vehicles are currently ill-equipped to notice, and it will be fascinating to see whether they can learn on the go to become familiar with gestures or even expressions.

Are These Systems Fully Utilitarian?

Finally, there’s a potential challenge for self-driving vehicles that falls more in line with typical concerns about artificial intelligence. Teaching these vehicles how to assess dangerous situations in which lives are at stake is already proving to be a challenge. The general idea is to make self-driving systems utilitarian, such that they choose whichever outcome saves the most lives. So, for instance, if it’s a choice between swerving into a single pedestrian or running a full car off the road, the machine will theoretically choose the former.
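
To make the utilitarian rule concrete, here’s a minimal sketch of a chooser that picks whichever maneuver has the lowest expected number of casualties. The option names, probabilities, and headcounts are invented for illustration; whether such a rule is actually acceptable is exactly the question at hand.

```python
# Bare-bones sketch of a utilitarian decision rule: pick the maneuver with the
# lowest expected casualties. All options and numbers are invented for illustration.

def choose_maneuver(options):
    """options: list of (name, probability_of_harm, people_at_risk) tuples."""
    return min(options, key=lambda opt: opt[1] * opt[2])

options = [
    ("swerve toward single pedestrian", 0.9, 1),  # expected casualties: 0.9
    ("run the full car off the road",   0.6, 4),  # expected casualties: 2.4
]
print(choose_maneuver(options)[0])  # "swerve toward single pedestrian"
```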

The trouble is that a system that’s designed to be fully utilitarian could make a strikingly inhuman decision that results in self-sacrifice when an ordinary driver would never make a similar decision. One example, per the previously linked article, is that a truck may be out of control and heading for a group of pedestrians. If you’re in a self-driving car, it might calculate that by swerving in front to intercept the truck—likely killing you in the process—it will minimize the total number of casualties. So the question is whether a car can or should be taught to evaluate different situations in different ways, or whether it truly ought to make these decisions purely based on odds and injury or death potential.

These are just some of the issues that still stand in the way of self-driving vehicles hitting the road. In some cases, it just comes down to how systems are initially designed, and what decisions are made about how these vehicles should perform. But there are also unresolved conflicts related to how the systems will learn and change once they’re on the road.

Charles Bell is a Los Angeles freelance writer who loves covering anything related to technology. Most recently, he’s had a particular interest in automated cars. He also probably needs to tweet more.
