The Ethics of Driverless Cars
The advent of driverless cars is upon us.
The automated/driverless car has been the plaything of science fiction writers for decades. The 2002 blockbuster Minority Report featured driverless cars on what were, essentially, giant Scalextric tracks.
Even 2004’s I, Robot and 2012’s Total Recall remake featured vehicles that could operate autonomously until the user chose to intervene, with dramatic, explosion-filled results.
Back in the real world, companies like Google and Apple are making the driverless car a reality. Apple are somewhat behind their rival, with their iCar not expected until 2021, but with an R&D budget of $10 billion we can expect it to be pretty special.
Elon Musk’s Tesla Motors was the first company to take a proper swing at car automation with its Autopilot feature.
A wealth of ultrasonic sensors, cameras and a front-mounted radar array (no, really) allows the vehicle to take over much of the driving, provided the conditions meet the software’s parameters.
There will undoubtedly come a time when the visions of the future become a reality and we travel in automated cocoons, allowing us time to converse with family and friends or, more likely, check Facebook.
Until then, driverless cars will enter a world where the computer has to contend with the unpredictable nature of humans.
Bad drivers, tired drivers, inexperienced drivers, drunk drivers, pedestrians and cyclists are all hazards that any automated car has to take into account, on top of the task of conveying its passengers from point A to point B safely.
Sooner or later the driverless car will be confronted with an ethical choice that could kill its passengers.
The renowned science fiction author Isaac Asimov famously devised the three laws of robotics in his 1942 short story Runaround. Whilst Asimov was fascinated by artificial life, he could foresee the disaster that would befall mankind should robotic life turn on its makers.
The three laws are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
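Purely as an illustration (the Action model and every name below are invented for this article; no real robot is built this way), the three laws amount to a strict priority ordering over candidate actions: the First Law always outranks the Second, which always outranks the Third. A minimal Python sketch of that ordering might look like this:

```python
from dataclasses import dataclass

# Illustrative sketch only: Asimov's three laws treated as a strict
# priority ordering. Every name here is hypothetical; no real robot
# is programmed this way.

@dataclass
class Action:
    harms_human: bool     # would this action injure a human?
    obeys_orders: bool    # does it follow the human's instruction?
    preserves_self: bool  # does the robot survive it?

def law_rank(a: Action) -> tuple:
    # Tuples compare element by element, so a First Law violation
    # always outweighs a Second, and a Second always outweighs a Third.
    return (a.harms_human, not a.obeys_orders, not a.preserves_self)

def choose(actions):
    return min(actions, key=law_rank)

# Sacrificing itself to obey an order beats surviving while disobeying.
print(choose([Action(False, True, False), Action(False, False, True)]))
```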
Whilst modern-day robots aren’t programmed with these three laws, their creators have evidently built their machines with those laws in mind. This is little wonder, as the robots in Asimov’s fiction were self-aware whereas Boston Dynamics’ Atlas robot most certainly isn’t.
Whether driverless cars are governed by Asimov’s laws or not, they will undoubtedly be presented with situations where they have to calculate a response that could conceivably kill their passengers.
It’s an interesting, albeit macabre, thought: under what circumstances will your car kill you?
Machine intelligence is programmed to calculate the variables and choose the ‘lesser of two evils’. A driverless car may determine that driving into a wall at speed, killing its two passengers, is a better choice than mowing down five pedestrians who have wandered into the road.
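As a rough sketch of that ‘lesser of two evils’ calculation (the option names and casualty figures below are invented purely for illustration, not taken from any real vehicle’s software), a purely utilitarian controller simply minimises expected deaths:

```python
# Hypothetical 'lesser of two evils' calculation. The options and
# casualty counts are invented for illustration only.

options = {
    "swerve_into_wall": 2,  # expected deaths: the two passengers
    "continue_ahead": 5,    # expected deaths: the five pedestrians
}

# A purely utilitarian controller picks the option with fewer deaths.
choice = min(options, key=options.get)
print(choice)  # -> swerve_into_wall: the car sacrifices its passengers
```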
The owner of the speeding death machine may feel differently.
Perhaps the issue isn’t that driverless technology has taken away our safety, as all the evidence suggests that driverless cars won’t be prone to erratic lane changes, speeding and all the other things we shouldn’t do… but do.
The issue, then, is that driverless cars take away our right to make the moral choice for ourselves. Or, arguably, to improvise a solution in a way that a driverless car wouldn’t or couldn’t.
Under the cold eye of a machine, morality is broken down into numbers. But humans may assess that the two lives in the car are worth more on the basis that one of them is an infant or child.
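To make that concrete, here is the same toy calculation with a weight attached to each life rather than a flat count of one. The weights are, of course, invented: the point is only that a human-style valuation, such as weighting a child’s life more heavily, can flip the machine’s ‘obvious’ answer:

```python
# Same toy model, but each life carries a weight instead of counting
# as exactly 1. The weights are invented for illustration only.

CHILD_WEIGHT = 5.0  # hypothetical: a human may weigh a child's life more
ADULT_WEIGHT = 1.0

options = {
    "swerve_into_wall": [CHILD_WEIGHT, ADULT_WEIGHT],  # two passengers, one a child
    "continue_ahead": [ADULT_WEIGHT] * 5,              # five adult pedestrians
}

choice = min(options, key=lambda o: sum(options[o]))
print(choice)  # -> continue_ahead: the weighted cost of losing the
               #    passengers (6.0) now exceeds the pedestrians' (5.0)
```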
It may be that driverless cars will integrate traffic camera technology to pre-empt emerging hazards and slow to a safe speed before the pedestrians step out. But as it stands, Asimov would be horrified.
Driverless cars aren’t yet legal, and in the absence of human intervention there are a lot of hypotheticals that need to be addressed before they roll off the production line in any real volume.
Even then, will we ever live in a world where we’re willing to go entirely hands-free when it comes to our cars?