THROWING THE BABY OUT WITH THE BATHWATER – THE CASE OF CRIMINAL RESPONSIBILITY AND DRIVERLESS CARS

The development and utilization of driverless cars are inevitable and closer to reality than ever. Eleven states in the U.S. have already passed legislation related to driverless cars, and a quick trip to California will let you see the vehicles of Waymo (Google's driverless car project) being tested on public roads. Tech giants such as Google, Facebook, Uber, and Tesla consider it to be the future of the automobile industry. Elon Musk, the CEO of Tesla, recently boldly predicted that within 10 years all new cars will be self-driving. Whether or not that prediction proves accurate, it is safe to say that with that kind of "firepower" and investment, we will be passengers sitting in a driver(less) seat sooner rather than later. And yet, challenging issues relating to ethics, regulation, and policy remain to be answered before these artificial chauffeurs can become an integral part of our lives.

From a technical standpoint, driverless cars are vehicles operated by sophisticated algorithms that rely on inputs collected by sensors embedded in them. These sensors are the 'eyes' and 'ears' of the car: they detect movement, shapes, and colors, calculate distances and trajectories, and orient the car with respect to other objects. Other embedded software (for the purposes of this post, 'scene analysis algorithms'), much like our brain, receives these inputs and makes (artificial) sense of them, e.g., a certain shape, X feet ahead of the car, moving in direction Y at speed Z, has a G percent probability of colliding with the car. A different piece of software (for the purposes of this post, 'decision making algorithms') computes this information, 'decides' which action is most appropriate in the situation, and then sends 'orders' to the wheel, engine, braking system, etc., to execute. It is important to understand that most of these algorithms are non-binary, i.e., they are not programmed to compute simple rules of causation such as 'if A happens, do B'. Rather, they are programmed to predict the probability that a certain event will occur, based on the data collected by their sensors, compared against scenarios they were introduced to in their programming phase. Hence, while the programmer gave the learning algorithm a dataset of labeled scenarios, accompanied by a weighing mechanism that enables the algorithm to maximize certain goals (such as staying on the asphalt and in lane) and minimize others (such as hitting objects), the programmer would not be able to predict the algorithm's specific decision in each scenario.
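
To make this concrete, here is a deliberately toy sketch, in Python, of the kind of weighted, probabilistic decision making described above. Every name, weight, and probability in it is invented for this post; real driverless-car software is vastly more complex. The point is the structure: candidate actions are scored against probabilities supplied by the scene analysis stage and against the programmer's weighing mechanism, not by hard-coded 'if A happens, do B' rules.

```python
# Toy sketch only: names, weights, and probabilities are invented for
# illustration and do not come from any real driverless-car system.

CANDIDATE_ACTIONS = ["maintain_course", "brake", "steer_left", "steer_right"]

# The programmer's weighing mechanism: negative weights for outcomes the
# algorithm should minimize, positive for outcomes it should maximize.
WEIGHTS = {
    "collision": -1000.0,  # minimize: hitting objects
    "off_lane": -50.0,     # minimize: leaving the asphalt / lane
    "progress": 1.0,       # maximize: getting where the passenger is going
}

def expected_score(action, scene):
    """Score one candidate action as a weighted sum of the outcome
    probabilities that the scene analysis stage estimated for it."""
    probs = scene[action]
    return sum(weight * probs.get(outcome, 0.0)
               for outcome, weight in WEIGHTS.items())

def decide(scene):
    """Pick the candidate action with the highest expected score. The
    output depends on the estimated probabilities, not on an explicit
    rule, which is why the programmer cannot foresee the specific
    decision in every scenario."""
    return max(CANDIDATE_ACTIONS,
               key=lambda action: expected_score(action, scene))

# Example scene: an object ahead makes continuing risky, braking safe,
# and swerving likely to leave the lane.
scene = {
    "maintain_course": {"collision": 0.80, "off_lane": 0.00, "progress": 1.0},
    "brake":           {"collision": 0.05, "off_lane": 0.00, "progress": 0.2},
    "steer_left":      {"collision": 0.30, "off_lane": 0.90, "progress": 0.8},
    "steer_right":     {"collision": 0.20, "off_lane": 0.90, "progress": 0.8},
}
print(decide(scene))  # -> "brake"
```

In a real system the probabilities would come from learned models rather than a hand-written dictionary, and the 'orders' sent to the wheel, engine, and brakes would be continuous control signals rather than a single label. But the liability question stays the same: the programmer chose the weights and the training data, not the specific decision.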

Given the above analysis, and assuming that driverless cars will inevitably make mistakes, the ethical and legal dilemmas crystallize. Unless the cars are programmed with immoral or illegal minimization and maximization goals, or are not properly audited for compliance with such goals, attributing any meaningful criminal intent (specific intent, recklessness, or negligence) to the programmers, manufacturers, etc., seems difficult, to say the least. That is, unless we accept that the mere inevitability of mistakes, which is a reality for driverless cars just as it is for human drivers, should result in strict criminal liability. Wouldn't that amount to charging a driving instructor with every illegal action committed by their students whenever the latter drive? Although this is a legal question, it is interwoven with moral policy determinations.

Alongside the obvious commercial benefits to the companies selling these new high-tech cars, society will also benefit tremendously from their use: in a nutshell, fewer accidents, fewer casualties, fewer traffic jams, less pollution, cheaper insurance, and greater mobility for the elderly and children. These merits should be weighed in deciding whether to apply criminal responsibility to humans for machines' unforeseen and faultless actions. Assuming these anomalies will be kept to a minimum, there is no reason to force criminal liability where none exists. Instead, these situations should be governed by a consequentialist analysis of tort law, namely by compensating those injured in such remote and unlikely situations from a trust funded by both the creators of driverless cars and the public. In other words, the existing form of criminal law suffices to capture culpable actions by the creators of driverless cars. In all other circumstances, where the machine acts as an 'intervening factor' that detaches any attribution to a person, criminal liability should not apply, simply because there is no one to blame. The same logic applies to malfunctions of other, less sophisticated machines that can cause injury or death. In these situations, tort law should be adapted to accommodate compensation of the injured, and the balance between the compensation provided by the corporations and by the public should be struck in a way that incentivizes corporations to constantly improve their products.

Finally, it should be noted that this conclusion does not apply to other autonomous systems, such as those devised to operate in war. Autonomous entities used in warfare are primarily designed to kill, rather than to drive safely from point A to point B. As such, their programming intrinsically carries higher stakes, different moral challenges, and greater destructive consequences, and should be analyzed accordingly.
