Monday 7 March 2016

Part 6 - Morality, I've heard of that.


Hi again, first off the rank this week is the first ever crash of an autonomous vehicle where it was the vehicle's fault, sort of.

The Google Lexus was doing around 2 mph and the bus about 15 mph, hardly speeds to cause much drama, but enough for some people to say this proves the technology is far from ready for use on the roads. Given that Google AVs have done more than one and a half million miles of automated driving and this is the first time they have been responsible for an accident, to me that indicates a very mature technology indeed. I would hazard a guess that the same distance with human drivers would not be so accident free.

Just on that subject the actual incident itself is interesting.
According to Google, the car had moved to the far right of its lane to allow vehicles to pass on the left within the same lane, then encountered sandbags and had to move back toward the centre of the lane to get around them. It assumed the bus would give way to it, as it was in the same lane, but the manually driven bus continued to pass and the car scraped its side.

Morally, the bus was probably in the wrong, but the car should have waited. Fickle things, drivers.

I was going to explore where the cheaper cars are in autonomosity (new word, attributable here?) but frankly it's too hard to sort out at the moment, as I don't have a lot of spare time, so I am going off on a tangent.

One of my colleagues at work started talking about something that gets a lot of attention today, something a lot of people consider to be key to the whole autonomous thingywhatsit.
That discussion centres on the "Moral Conundrum".

In a nutshell, you have the scenario where your AV is driving along, a truck is coming in the opposite direction and a kid runs out in front of you. The moral conundrum is what the car does: swerve in front of the truck and kill its own passengers, or hit the kid. This comes up time and time again with slightly differing scenarios but the same underlying moral dilemma, and it demands an answer.

Personally I think it's a crock, because it only looks at a thought-out moral choice: one which an AI can't make, and one which a person in that situation can't think fast enough to make anyway.

Let's run through this given today's level of technology and perspective.

You are driving along, a truck is coming the other way and a kid runs out in front of you. What do you do?
A deep and meaningful intellectual discussion with yourself over the moral situation doesn't get a look in; there isn't enough time. Instinct kicks in and your brain sends a message to hit the brakes, hard. That action takes time, you are now closer to the kid, and you just might have time to react further, or you may not.

This now comes down to two things.

1. How fast you, as a person, are capable of thinking, and
2. How modern your car is. Do you have ABS, ESC or any other safety features that would allow you to take evasive action while braking?

If you have an older vehicle you are already committed; it is now in the hands of fate whether you hit the kid or the car slides in front of the truck. End of story.

If you have modern safety features and can think fast enough, you may be able to swerve, and the human reaction would be away from the immediate danger, the child, without thinking of the truck hurtling toward you. All of the actions so far have been instinctive, split-second reactions, pretty much without intellectual thought.
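To put some rough numbers on that instinct-versus-physics problem, here is a minimal back-of-the-envelope sketch in Python using the standard stopping-distance formula. The speeds, reaction times and deceleration figures are my assumptions for illustration, not measurements from any real vehicle or driver.

```python
# Rough stopping-distance comparison: human driver vs. automated system.
# Assumed figures for illustration only: ~1.5 s human reaction time,
# ~0.1 s machine reaction time, 7 m/s^2 braking deceleration (dry road).

def stopping_distance(speed_kmh, reaction_time_s, deceleration_ms2=7.0):
    """Reaction distance plus braking distance, in metres."""
    v = speed_kmh / 3.6                       # km/h -> m/s
    reaction_distance = v * reaction_time_s   # ground covered before braking starts
    braking_distance = v ** 2 / (2 * deceleration_ms2)
    return reaction_distance + braking_distance

for label, reaction in [("human, ~1.5 s reaction", 1.5),
                        ("automated, ~0.1 s reaction", 0.1)]:
    print(f"{label}: {stopping_distance(50, reaction):.1f} m to stop from 50 km/h")
```

On those assumed figures the reaction-time gap alone is worth nearly 20 metres of road, which is really the whole argument in one number.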

Let's jump forward to an automated road where your Johnny Cab is driving, an automated delivery truck is coming toward you, and there is a stream of automated vehicles behind you and behind the truck.

Your lidar picks up the kid running toward the road, long before a driver would have seen them, and calculates an intersection trajectory. Based on that, it immediately initiates an optimally calculated deceleration. Within milliseconds it has alerted the truck to the situation, along with all the vehicles behind, which start to brake in synchronisation. The truck's braking alerts all the vehicles behind it as well. Because the child was detected well in advance, the chance of a collision has been reduced to minimal, and the vehicle can decide, based on the actions of the child, whether to brake harder or resume speed. If the child continues, avoidance decisions can be made based on all the collision calculations. Warning alerts (horn, siren etc.) can also be initiated automatically during this process.
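For what it's worth, here is a minimal sketch, in Python, of the kind of decision loop described above. Every class, function, threshold and the V2V broadcast interface is an assumption of mine for illustration; nobody has published a real API for this.

```python
# Hypothetical sketch of the detect -> decelerate -> broadcast sequence
# described above. All names and thresholds are invented for illustration;
# this is not any real vehicle's control code.

from dataclasses import dataclass

@dataclass
class Track:
    distance_m: float        # current gap to the detected object
    closing_speed_ms: float  # rate at which that gap is shrinking (m/s)

def time_to_intersection(track: Track) -> float:
    """Seconds until the paths cross, assuming a constant closing speed."""
    if track.closing_speed_ms <= 0:
        return float("inf")  # object holding still or moving away: no conflict
    return track.distance_m / track.closing_speed_ms

def handle_detection(track: Track, brakes, v2v, horn):
    """React to a detected pedestrian and warn the surrounding traffic."""
    tti = time_to_intersection(track)
    if tti < 5.0:                        # assumed planning horizon
        brakes.apply(level="moderate")   # start shedding speed early
        v2v.broadcast({"event": "hazard_braking", "tti_s": tti})  # followers brake in sync
    if tti < 2.0:                        # assumed emergency threshold
        brakes.apply(level="maximum")
        horn.sound()                     # automatic warning alert
```

The point of the early, moderate braking step is exactly the one made above: because detection happens so far out, the emergency branch rarely needs to fire at all.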

Given these two scenarios one thing is abundantly clear: the chances of survival for all the participants are greatly enhanced by the level of automation, just as ABS has saved lives, ESC has saved lives, and seat belts and airbags save lives. AI calculation and response is far faster than human reaction, can keep calculating throughout an emergency, and has the advantage of not suffering brain freeze trying to sort out a moral dilemma. Because of that it will most certainly save many more lives.

As I have said before and will say again (mainly because I probably forgot that I said it before, but I digress), automated vehicles will not stop all road deaths. There will always be situations that can't be foreseen, or human error, acts of nature and, increasingly, deliberate human intervention through malfeasance (nice word) or terrorist activity.

However, well over a million people throughout the world die every year, and millions more are injured, in motor vehicle accidents. Automation will stop more than 90% of that over time.

So the only true moral dilemma is how fast can we get there and save these lives?
