Should Self-Driving Cars Be Banned?

by senior futurist Richard Worzel, C.F.A.

After dark on March 18, 2018, an experimental self-driving car operated by Uber struck and killed a pedestrian crossing a street in Tempe, Arizona. Days later, a driver was killed when his Tesla, running on its Autopilot setting, crashed in Mountain View, California. And in May 2016, a man died in Florida when his Tesla, operating on Autopilot, drove into a truck turning across the highway.

There have been a handful of other non-fatal collisions involving self-driving cars (also called autonomous vehicles, or AVs), in most of which authorities have cleared the car of responsibility.

The companies testing self-driving cars say that in most cases, it’s more a matter of human error – driver or pedestrian – than machine error, and that AVs are actually safer in most circumstances. On the other side, some commentators are saying that if you look at the number of miles driven per collision, then AVs are clearly less safe than human drivers, and should be banned at least until they’re as safe or better than humans.

So, which is it: Are self-driving cars more or less safe than human drivers? I think the answer is: both, and for a number of different reasons.

First, let’s take the case of the Tesla crashes. The Autopilot feature is described by Tesla as a “driver assistance tool”, somewhat akin to the cruise control feature that has been around for decades, but more advanced. In such cases the car isn’t supposed to drive itself. Instead, the human is supposed to be in charge, with the car merely trying to help him drive more safely. And in the two cases I’ve cited, the human was ignoring his responsibilities.

The Level 3 Threat

But this illustrates one of the critical problems of self-driving and autonomous vehicles. To explain this more clearly, I want to back up and review the levels of vehicle autonomy, as defined by the National Highway Traffic Safety Administration (NHTSA)[1]:

Level 0 – Vehicle has no automatic or autonomous controls, like a Model T Ford.

Level 1 – Vehicle has one automated control function, like the cruise control found on virtually any car sold in the last 30 years or more.

Level 2 – Vehicle has two or more control functions that work together to assist the driver, such as adaptive cruise control plus lane centering.

Level 3 – Vehicle has integrated control systems that can drive the car autonomously under certain highway conditions, but under human supervision. This might, theoretically, allow the human to read, text, or talk on the phone, as long as he maintains some awareness of what’s happening on the road. The car can drive itself under most circumstances, but the driver has ultimate responsibility, and must be ready to take over at any time.

Level 4 – Vehicle is completely autonomous, and can drive itself while the occupants read, sleep, or engage in other activities (use your imagination). Some prototype Level 4 vehicles, such as those being tested by Google, have no human controls at all – no steering wheel or pedals.
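The taxonomy above can be sketched as a small enum with one rule attached – a minimal, hypothetical encoding for illustration only, not any regulator's or manufacturer's actual data model:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Illustrative encoding of the NHTSA levels described above."""
    L0 = 0  # no automation (Model T)
    L1 = 1  # one control function (cruise control)
    L2 = 2  # combined assist functions (adaptive cruise + lane centering)
    L3 = 3  # conditional autonomy under human supervision
    L4 = 4  # full autonomy; no human controls required

def human_must_supervise(level: AutonomyLevel) -> bool:
    """At L3 and below, a human remains responsible for the drive."""
    return level <= AutonomyLevel.L3
```

The one-line rule is the crux of the argument that follows: L3 is the last level at which the human is still on the hook.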

Of these five levels of autonomy, Level 3 (L3) is, in my opinion, the most dangerous, and for exactly the reasons illustrated by the fatal Tesla crashes so far: humans are supposed to supervise, but don’t. In at least two such crashes, the car repeatedly warned the driver to take control of the vehicle, and the warnings were ignored.

But even if a human were conscientiously supervising the operation of the vehicle, I still think this is the most dangerous level of autonomy because it is so easy to get bored, and stop paying attention.

The vast majority of the time in an L3 car under autonomous control, there will be little or nothing for the driver to do. Over time this becomes boring, and the driver’s attention drifts from the road. In an L2 car or below, such inattention can’t last very long: the driver either snaps back to paying attention to the road, or crashes. We’re all used to that, and (mostly) pay attention (unless we’re eating, texting, talking, or applying make-up).

But in an L3 car, our inattention can last for some time with no apparent consequences. This will tend to lull the driver into a false sense of security.

Then, if some crisis develops – even if the driver isn’t focused on something else but is merely bored – it will take her some time to become aware that her attention is required, grasp the situation, decide what action is necessary, and then act. This can take several seconds – typically too long to avoid a collision. So, even with an attentive driver, the boredom of being responsible for driving without actually doing the driving is, in my opinion, the biggest danger with self-driving cars.
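To put “several seconds” in concrete terms, here is a back-of-the-envelope calculation; the five-second takeover time and the 100 km/h speed are illustrative assumptions, not measured figures:

```python
def distance_traveled(speed_kmh: float, seconds: float) -> float:
    """Metres of road covered while the driver re-orients and reacts."""
    return speed_kmh / 3.6 * seconds  # km/h -> m/s, times the delay

# At highway speed, a few seconds of re-orientation covers a lot of road:
# 100 km/h for 5 seconds is roughly 139 m before the driver even acts.
d = distance_traveled(100.0, 5.0)
```

That is well over the length of a football field travelled blind, which is why a bored-but-responsible driver is such a poor safety backstop.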

Should Full AVs Be Banned?

I believe L4 vehicles are actually safer than L3, primarily because they are designed specifically to operate without human intervention. The designers know they’re on the hook, and can’t cop out by flipping control back to a supposedly attentive human. So, if they put an L4 vehicle on the road, it’s because they’ve done everything they can think of to make it safe.

But does that mean it is safe? Not yet.

I would suggest that a properly designed and tested L4 vehicle is probably safer than a human driver in most circumstances, and under most road conditions. And, of course, there are variations between different L4 vehicles: some manufacturers undoubtedly make safer vehicles than others.

Unfortunately, at this point the industry is largely self-regulating, which is a problem. Human drivers must pass examinations, both written and practical, before they are allowed to drive. Why shouldn’t there be comparable independent standards for AVs? This, I think, is the first step we should take to make AVs safe.

However, let’s circle back to my emphasis on “most”. An L4 vehicle is probably safer than a human driver in most circumstances, and under most conditions. We know that AVs need clearly defined highway lanes to function, for instance, and don’t do very well in heavy snow, rain, or fog. Such conditions are difficult for humans as well, of course – but human beings have evolved to be good at handling ambiguous situations. It’s a survival trait.

Perhaps, then, a next step is for an AV to sound an alarm, or issue a warning, when current conditions are not safe for it to operate, and then either turn control back to a human driver or pull over and stop. (Of course, there are times – such as driving on a superhighway in a heavy downpour – when stopping can be dangerous, too.) AVs should be designed to detect and acknowledge when it’s not safe for them to operate.
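The policy I’m describing can be sketched as a simple decision function. Everything here – the visibility threshold, the inputs, the three responses – is a hypothetical illustration of the idea, not any real AV’s logic:

```python
def respond_to_conditions(visibility_m: float,
                          lane_markings_visible: bool,
                          safe_shoulder: bool) -> str:
    """Illustrative policy: continue, pull over, or hand back control.

    The 150 m visibility threshold is an assumption for the sketch.
    """
    if visibility_m > 150 and lane_markings_visible:
        return "continue"
    if safe_shoulder:
        return "warn and pull over"
    # Stopping in a live lane can be dangerous too, so fall back
    # to warning the human and handing over control.
    return "warn and hand control to human"
```

The point of the sketch is the structure, not the numbers: the vehicle must have an explicit, testable answer for “what do you do when you know you’re out of your depth?”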

But just equaling the abilities of a good human driver shouldn’t be the end-goal for AVs. They should aim to become far superior to any human – and technology can and should contribute to that.

The Future of Self-Driving Cars

One technological development of the past that caught me by surprise was the mapping of the drivable world by Google, Garmin, and others. It wasn’t that the technology was difficult; it was that every road and every intersection had to be mapped, cataloged, and placed in available memory.

So, the next step for AVs is to document current road conditions everywhere they go, and to store and share that information in the Cloud, or with a fog computing network, so that all other AVs will be warned of dangerous and difficult situations. Hence, if there’s a particularly deep pothole, an AV should know about it before reaching that stretch of road. If a section of road is icy, traction poor, and conditions dangerous, an AV approaching that stretch should know it, and either slow down, detour to a safer route, or refuse to proceed at all. If a collision blocks a given stretch of superhighway, your AV would be warned as it happens, and compute the best alternative route.
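The sharing scheme amounts to a pooled, queryable map of hazards keyed by road segment. Here is a toy stand-in for such a service – the segment IDs and the in-memory store are assumptions for illustration; a real system would use precise geographic references and expire stale reports:

```python
from collections import defaultdict

class SharedHazardMap:
    """Toy stand-in for a cloud/fog service that pools road hazards."""

    def __init__(self):
        # segment ID -> list of reported hazards on that segment
        self._hazards = defaultdict(list)

    def report(self, segment_id: str, hazard: str) -> None:
        """An AV (or road sensor) reports a hazard it has observed."""
        self._hazards[segment_id].append(hazard)

    def hazards_ahead(self, route: list) -> dict:
        """What every other AV should know before reaching each segment."""
        return {seg: self._hazards[seg] for seg in route if self._hazards[seg]}
```

One car reports the pothole; every car behind it queries its planned route and slows down, detours, or reroutes before arriving.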

All of this would make self-driving cars safer, and travel more convenient. And all of it is merely an extension of technologies that are available today, without any new breakthroughs, although the volume of data required would be immense.

But it shouldn’t stop with road conditions. AVs should assess their own roadworthiness as well.

The Self-Aware Vehicle

An AV could, and, in my opinion, should keep track of how well it is operating; when it detects deviations in the vehicle’s performance, it should check them against the wear profiles of its own components to develop a real-time safety diagnostic of its own operation. Hence, when brake shoes start to approach unsafe wear, the car should warn the owner (and perhaps the place that services the car) of this development. The same is true of any other working part – or of unexpected developments that don’t fit an expected wear profile and may indicate an emerging problem.
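The check itself is simple to state: compare a measurement against both a hard safety limit and the expected wear profile, and flag anything that fits neither. The brake-shoe example below is a sketch with made-up thresholds, not a real diagnostic:

```python
def check_component(measured_mm: float, expected_mm: float,
                    unsafe_mm: float = 3.0, tolerance_mm: float = 1.0) -> str:
    """Compare measured brake-shoe material against its wear profile.

    All thresholds are illustrative assumptions for this sketch.
    """
    if measured_mm <= unsafe_mm:
        # Below the hard safety limit: warn the owner (and the shop).
        return "warn owner: unsafe wear"
    if abs(measured_mm - expected_mm) > tolerance_mm:
        # Safe for now, but not following the expected wear curve --
        # possibly an emerging problem worth investigating.
        return "flag: unexpected deviation from wear profile"
    return "ok"
```

Run many times per second across every monitored part, this is the “inhuman attention to detail” the next paragraph describes.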

Of course, cars do this to some extent today, but as cars become smarter, we should use their greater processing abilities, and their inhuman attention to detail to perform such checks many times per second.

AVs should also evaluate how well they are driving, and all such data should be required to be pooled and made available to all AV manufacturers, so that AV software can be continuously upgraded.

In short, AVs should always be improving in every aspect of operation. This, too, should be required of all AVs before they are approved to operate on the same highways as humans.

So, bottom line: Are self-driving cars safer than human drivers? Should they be allowed on the roads, or banned as being dangerous?

The answer today is that they can be safer than humans, but aren’t always. They will become much, much better over time, but in the meantime, they should be required to meet independent standards of safety and performance before they are allowed to drive on their own. And I would reverse the permitted operation of L3 vehicles: they should be driven by humans assisted by smart computers, rather than driven by computers with human supervision.

And once fully autonomous vehicles become widespread, there will be several potentially enormous benefits, about which much has been written.

There will also be enormous social, economic, and personal costs, which have received much less attention, but I’ll write about those in a later blog.

© Copyright, IF Research, April 2018.


