Keep Up

8: A discussion on automated driving

What autonomous vehicles mean for us.

The benefits of autonomous vehicles are certainly clear – time saved, increased productivity, improved safety, continuous service availability – but there are also numerous challenges, among them ethical and legal issues that will need to be addressed before this technology can become widespread. This article discusses these issues from both an engineering perspective and a philosophical one. It begins with a brief history of the development of self-driving cars, then goes into more detail about the possible ethical implications of their use. Finally, it concludes with some practical suggestions for how we can best deal with these issues in our society.

A Brief History of Autonomous Vehicles

Research into automated driving dates back to at least the 1970s, when experimental vehicles were first driven under computer control. Progress has continued since then, with a number of automated features being added to cars throughout the 1990s and 2000s. At present, fully autonomous cars are allowed on public roads in only a few states, with other states moving toward allowing such vehicles by 2030. In any case, it is expected that self-driving cars will become ubiquitous on American roads within our lifetimes.

Driverless vehicles are appealing for a number of reasons. They can operate more efficiently than human drivers: they don’t get distracted or tired, and their reaction times are faster than those of humans. This means they could safely drive much closer to one another, potentially reducing congestion and smoothing the flow of traffic. They could also travel routes that human drivers might avoid or be unable to navigate, such as a gravel road through the woods, which could help emergency and delivery services reach people in need more quickly.

Today’s driverless cars also promise advantages over conventional, human-driven vehicles: proponents argue they will be safer (fewer crashes), more reliable, and cheaper to operate over their lifetimes.

In addition, the technology behind self-driving cars is rapidly advancing and is projected to become much more affordable in the near future. Given these advantages, it seems almost certain that fully autonomous vehicles are the future of personal transportation.

Current Issues With Autonomous Vehicles

There are many issues – legal, social and business – that will need to be addressed before self-driving vehicles can become a widespread reality. These include:

Insurance: Nobody has yet determined who would be responsible in the event of an accident. If there is an accident that involves two or more autonomous vehicles, who would be found legally liable? What if the accident involved a self-driving car and a non-autonomous vehicle? How would you assign liability in that instance? Would it be the owner of the vehicle, the company that made the vehicle, the designer of the software, the manufacturer of the components, or some other party? There are numerous other unanswered questions about liability as well.

Public Perception: There is a lot of distrust of self-driving vehicles among the general public. A recent poll found that 69% of respondents were “not confident at all” that self-driving cars would “protect their safety”. This distrust likely stems from unfamiliarity with the technology: 73% of respondents believed that software is too complex to ever be completely error-free.

Laws: We currently have laws against reckless driving, speeding, failure to stop, excessive blood alcohol content and a variety of other actions. But how do you police autonomous vehicles? There are several laws that we take for granted that simply won’t apply to self-driving cars. For example, a cop can pull you over if he spots you talking on your cell phone without a hands-free device. How would he stop an autonomous vehicle? Additionally, all kinds of new laws will be needed to address problems that come with an increased number of autonomous vehicles, such as whether the owner of an autonomous vehicle remains responsible for any damage it causes.

Insurance and laws are ultimately legal and social issues to be worked out; safety, discussed next, is first and foremost an engineering challenge.

Safety and Security: It’s absolutely critical that self-driving cars be as close to 100% safe as possible. One high-profile accident could set the entire industry back years and would almost certainly invite bans on self-driving cars from governments around the world. While 99.99% safety may be achievable through rigorous testing, it’s the remaining 0.01% chance of a fatal error that will determine whether this technology ever takes off.

Self-driving cars need to be as safe as (if not safer than) any other car on the road. This means that software engineers need to anticipate everything that could go wrong and code solutions in advance. That is particularly difficult because self-driving cars face so many different operating conditions, and the potential situations they may encounter are effectively endless.

The other issue is security. Because self-driving vehicles depend on a great deal of complex software, they will be susceptible to hacking and security breaches. Automakers will need to implement top-notch security measures in their vehicles and test them thoroughly to ensure they can’t be compromised. Additionally, automakers may be held liable for any damage a vehicle causes if it is ever hacked.
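To make “security measures” a little more concrete: one basic building block is authenticating every control message so that an attacker who gains access to the vehicle’s network cannot inject forged commands. The sketch below is purely illustrative – the key, the message format, and the function names are invented for the example, not taken from any real automotive system.

```python
import hmac
import hashlib

# Hypothetical example: a shared secret used to authenticate control messages.
# In a real vehicle this would be provisioned securely per unit, never hard-coded.
SHARED_KEY = b"vehicle-ecu-secret"

def sign(message: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Compute an HMAC-SHA256 tag so the receiver can verify authenticity."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes, key: bytes = SHARED_KEY) -> bool:
    """Constant-time comparison avoids leaking tag information via timing."""
    return hmac.compare_digest(sign(message, key), tag)

command = b"SET_SPEED 55"
tag = sign(command)

print(verify(command, tag))           # a genuine command is accepted
print(verify(b"SET_SPEED 120", tag))  # a forged command is rejected
```

This is only one layer, of course; real vehicles would also need secure boot, over-the-air update signing, and network segmentation between infotainment and driving systems.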

Price: The cost of self-driving vehicles is certainly going to be a barrier to entry for the general public. The lower the price, the more people will buy them, but lower prices also mean the automaker’s bottom line takes a big hit.

Ethical Considerations on Autonomous Vehicles

Just as there are many social issues regarding autonomous vehicles, there are a lot of ethical issues too. The problem with automating vehicles is that it can allow people to do things that they otherwise wouldn’t do if they had to manually operate the car. This opens the door to several ethical dilemmas.

This is already a problem with features such as cruise control and parking assist. It seems that every year there is a story in the news about a driver who was texting or talking on the phone while the vehicle was on cruise control. While most people understand that this isn’t safe or legal, there will always be those who feel it’s perfectly fine as long as the car is handling the driving.

There are other times when automated features can lead to accidents. A couple of years ago there was a story about a man who accidentally killed his toddler while parking his car. The vehicle wasn’t actually in reverse, but in neutral; the parking assist feature slowly began rolling the car backwards, and the father didn’t notice in time before he hit another vehicle and then a tree.

That being said, the most prominent ethical issue that will need to be solved is how autonomous vehicles should weigh the relative value of human lives. If, at a given moment, every possible future state leads to an accident that harms one or more human beings, how should an autonomous vehicle decide which of them to harm? Such dilemmas are an inevitable consequence of these vehicles: it’s impossible to build a car that perfectly protects everyone at all times. This only gets more complicated when multiple autonomous cars share the road.

In addition, there are so many potential situations where an autonomous vehicle will need to decide which living being to sacrifice.

Should an autonomous vehicle swerve and hit a police officer to avoid hitting a crowd of school children?

Should it be programmed to always protect the driver over the passenger or everyone else?

At a 4-way stop, if one vehicle has to decide between hitting a mother with a stroller or a teenage couple, who should it choose?

What if none of the humans involved are wearing their seat belts? Should the vehicle take that into account when deciding what action to take?

These questions (and others) are important to consider in the pursuit of true autonomous vehicles that we can trust to do the right thing.
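One way engineers make questions like these concrete is to treat the choice as minimizing an explicit cost function over the maneuvers still available. The sketch below is purely illustrative: the `Outcome` fields, the weights, and the numbers are all invented, and choosing those weights is precisely the ethical dilemma described above – for instance, the `occupant_weight` parameter turns “should it protect the driver over everyone else?” into a single number someone has to pick.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """Predicted result of one candidate maneuver (all values hypothetical)."""
    maneuver: str
    injuries: int          # predicted number of people harmed outside the car
    severity: float        # 0.0 (minor) .. 1.0 (fatal)
    occupant_risk: float   # risk to the vehicle's own passengers, 0.0 .. 1.0

def cost(o: Outcome, occupant_weight: float = 1.0) -> float:
    # A simple expected-harm score. Raising occupant_weight makes the car
    # prioritize its own passengers; lowering it prioritizes bystanders.
    return o.injuries * o.severity + occupant_weight * o.occupant_risk

# Suppose every remaining maneuver causes some harm:
outcomes = [
    Outcome("brake hard",   injuries=1, severity=0.3, occupant_risk=0.4),
    Outcome("swerve left",  injuries=2, severity=0.2, occupant_risk=0.1),
    Outcome("swerve right", injuries=1, severity=0.8, occupant_risk=0.1),
]

best = min(outcomes, key=cost)
print(best.maneuver)  # the maneuver with the lowest expected-harm score
```

Note that the code doesn’t resolve the dilemma at all; it merely relocates it into the weights and the severity estimates, which is exactly why these questions must be debated before such systems are deployed.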

In summary, autonomous vehicles are an exciting future, but they come with a lot of responsibility and danger. We need to get it right the first time, because rolling out half-ready autonomous cars will cause more problems than it solves. There will be times when, no matter what the autonomous car does, its choice won’t be right for everyone. And if people don’t trust the vehicle and its judgment, they will second-guess it at the worst moments, leading inevitably to more crashes.

But I’m optimistic that engineers can solve these problems before these cars reach the general public. I can’t wait for the day when we can all get into our cars, tell them where we want to go, and read a book while they drive us to our destination.