Applying the Sudden Emergency Doctrine to Autonomous Vehicles: The Case of the Speeding Truck and the Polar Bear

26 Pages, Posted: 14 Jul 2016

Katherine Sheriff

Emory University, School of Law, Students

Date Written: May 19, 2016

Abstract

In this hypothetical accident, the driverless car encounters a "polar bear": an input it has not been programmed to handle. In the real instance, the "polar bear" is a speeding truck running a red light that the driverless car ought to have sensed and avoided. Within the traditional tort paradigm, liability extends to outcomes foreseeable from tortious conduct or operation. Liability for secondary outcomes, however, arises where the vehicle ought to have sensed data in its immediate surroundings but failed to do so; that is, no matter how fast the car swerved, absent any additional negligence, some negligent act set the entire sequence in motion.

In such situations, the driverless car's programmer would be responsible for secondary outcomes not foreseen or realized before the product's release into society. The standard for foreseeability could then no longer appropriately be limited to the "reasonable person," for the simple reason that the driverless car is operated not by the "reasonable person" but by the "super-person." Given the realities of human limitation, driverless cars are marketed as picking up the slack where humans err. The standard of foreseeability is therefore not what the reasonable driver or person could fathom, but what the programmer of the reasonable "super-person" or "super-driver" would have determined through the numerous testing iterations necessary before releasing the autonomous vehicle to market.

The sudden emergency doctrine (SED) operates in tort law to relieve from liability actors whom the law deems incapable of deliberate decision-making at the moment of an otherwise negligent act. The crux of the doctrine is that liability is quashed only in extraordinary situations: lightning striking, a typhoon shutting down traffic signals, a platypus crossing the road, a terrorist attack, or a passenger suffering cardiac arrest.

To date, humans lack the capacity to deliberate or process information in extreme circumstances, again, like lightning striking the road ahead, to the extent necessary for rational decision-making. Unlike human drivers, however, autonomous vehicles (AVs), or driverless cars, theoretically have the capacity to anticipate every obstacle that has been pre-programmed and to make decisions based on that programming, even during emergencies.
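To make the "polar bear" problem concrete, the following is a minimal, purely illustrative sketch in Python. The obstacle classes, responses, and function names are hypothetical and are not drawn from any actual AV system; the sketch shows only how a table of pre-programmed responses leaves a gap for an input the programmer never anticipated.

```python
# Illustrative sketch only: a toy decision routine showing how a
# pre-programmed obstacle-response table leaves a gap for inputs the
# programmer never anticipated (the "polar bear" problem).
# All obstacle classes and maneuvers below are hypothetical.

KNOWN_RESPONSES = {
    "pedestrian": "brake",
    "stopped_vehicle": "change_lane",
    "red_light_runner": "brake_and_swerve",
    "road_debris": "swerve",
}

def plan_maneuver(detected_obstacle: str) -> str:
    """Return the pre-programmed maneuver for a classified obstacle.

    If the obstacle class was never programmed, no deliberated response
    exists; the fallback here is a generic emergency stop, which is
    precisely the kind of unreasoned reaction the sudden emergency
    doctrine excuses in human drivers.
    """
    try:
        return KNOWN_RESPONSES[detected_obstacle]
    except KeyError:
        # Unanticipated input: no decision was ever "made" for this case.
        return "emergency_stop"

if __name__ == "__main__":
    print(plan_maneuver("red_light_runner"))  # pre-programmed response
    print(plan_maneuver("polar_bear"))        # unprogrammed: fallback only
```

On this sketch, the liability question the paper poses is whether the fallback branch, the only branch a court could examine in the "polar bear" case, should be judged as a deliberate design choice of the programmer or as the machine analogue of a human driver's instinctive reaction under SED.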

Still, in the narrow set of emergency circumstances proposed in the Introduction, it is conceivable that driverless cars might lack pre-programmed directions to act. After a lightning strike, a massive system failure, a typhoon, or a speeding platypus crossing the road, the question arises whether the driverless car's action would receive the same treatment from courts as would a human actor's under SED. This paper endeavors to answer that question and then to explain why, as a policy matter, SED ought to apply to AVs.

Keywords: autonomous vehicles, tort liability, machine learning

Suggested Citation

Sheriff, Katherine, Applying the Sudden Emergency Doctrine to Autonomous Vehicles: The Case of the Speeding Truck and the Polar Bear (May 19, 2016). Available at SSRN: https://ssrn.com/abstract=2807597 or http://dx.doi.org/10.2139/ssrn.2807597

Katherine Sheriff (Contact Author)

Emory University, School of Law, Students

Atlanta, GA
United States
