What Should Tort Law Do When Autonomous Vehicles Crash?

This week it was announced that the 2018 Subaru Crosstrek will continue to be made available with a manual transmission. Each year we “save the manuals” folks cringe as the drip, drip, drip of models no longer featuring stick shifts is announced. Each preservation of driver control of gear selection, such as that of the ’18 Crosstrek, is a ray of hope.

The overwhelming majority of American motorists seem to prefer automatic transmissions, which are, of course, a form of self-driving. For in the hoopla attending the announcement of cars that park, steer and stop themselves, it is easy to forget that most drivers already own cars that shift themselves. In other words, there's a gradation of self-driving. From automatic transmissions to cruise control, to automatic headlights that illuminate when natural light dims, to anti-lock brakes that “self-pump” in low-traction situations, to stability control that actuates both accelerator and brakes when the vehicle approaches its limits of adhesion, to lane departure correction systems that put you back on track when you drift across a divider line, to collision avoidance systems that brake automatically to avoid hitting an immobile object, to automated parallel (or perpendicular, or angled) parking systems, each year more and more drivers purchase vehicles that accomplish for them a task heretofore incumbent on the driver herself. Indeed, back in prehistoric times when I got my first driver’s license, changing a tire was part of my driver’s test! Today many tires can travel fifty miles with no air pressure; soon changing a flat tire will be yet another task motorists just won’t have to accomplish.

And this trend is continuing. There’s little risk of encountering Christine (a killer Plymouth Fury with a mind of its own), but Teslas are upgraded automatically and even repaired at night while their owners sleep. The 2014 BMW X5 with the Traffic Jam Assistant option can drive itself at up to 25 miles per hour so long as the “custodian” keeps a hand on the steering wheel. The Society of Automotive Engineers (SAE) has developed standard classifications of self-driving capability; Levels 2 and 3 are currently being tested on public roads, while Levels 4 and 5 remain confined to private courses (a rough code sketch of the taxonomy follows the list):

  • Level 0: Automated system controls nothing, but may issue warnings (e.g., blind spot monitor).
  • Level 1: Automated system includes features such as Adaptive Cruise Control (ACC) (slowing the car automatically to match the speed of forward traffic), Parking Assistance with automated steering, and lane departure correction systems. Driver must be ready and able to take control at any time.
  • Level 2: Automated system executes all accelerating, braking, and steering. It can deactivate immediately upon takeover by the driver. The driver is obliged to be alert to objects and events and to respond if the automated system fails to respond properly.
  • Level 3: Like Level 2, but within limited environments (such as freeways) the driver can safely turn her attention away from driving tasks, though she must still be prepared to take control when needed.
  • Level 4: Like Level 3, but no driver attention is required. Outside the limited environment, the vehicle will enter a safe fallback mode (i.e., park the car) if the driver does not retake control.
  • Level 5: Other than setting the destination and starting the system, no human intervention is required. The automated system can drive to any location where it is legal to drive and make its own decisions.
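
For readers who like their taxonomies spelled out precisely, here is a minimal code sketch of the levels above, expressed as a small lookup table. It is purely illustrative: the field names and the Python rendering are mine, not SAE's, and they compress a great deal of engineering nuance into a few booleans.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SaeLevel:
    """One row of the SAE taxonomy, summarized very loosely for illustration."""
    number: int
    system_does_all_driving: bool   # steering, braking and accelerating handled by the machine
    driver_must_monitor: bool       # must a human watch the road at all times?
    human_is_fallback: bool         # is a human expected to take over when the system hits its limits?
    note: str

# Rough encoding of the list above; boundaries are simplified for illustration.
SAE_LEVELS = [
    SaeLevel(0, False, True,  True,  "warnings only (e.g., blind spot monitor)"),
    SaeLevel(1, False, True,  True,  "automates one task at a time, e.g., adaptive cruise control"),
    SaeLevel(2, True,  True,  True,  "drives itself, but the driver must supervise continuously"),
    SaeLevel(3, True,  False, True,  "driver may look away within limited environments such as freeways"),
    SaeLevel(4, True,  False, False, "falls back safely (e.g., parks) if no human takes over"),
    SaeLevel(5, True,  False, False, "no human intervention beyond choosing the destination"),
]

# Example query: which levels still rely on a human as the safety net?
human_dependent = [lvl.number for lvl in SAE_LEVELS if lvl.human_is_fallback]  # [0, 1, 2, 3]
```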

Alphabet (Google’s parent), Tesla and Uber are testing vehicles that provide levels 3 through 5 of automation. Though the machines are sometimes capable of full Level 5 automation, when driven on public roads legislation in the several states that explicitly allow for their testing requires at least one person to be on board to monitor the vehicle’s proper operation and to take over if and when needed. The testing is progressing apace (notwithstanding a heated legal dispute between Alphabet and Uber about allegedly stolen trade secrets).

The potential safety benefits are tremendous. Self-driving cars don’t rubberneck or drive while drunk. They don’t talk on cell phones or turn their heads to comfort screaming children in the back seat. A convoy of them can accelerate from a stop light simultaneously and maintain very short distances between vehicles, greatly increasing the load capacity of roads and substantially shortening commutes. One prominent study predicted an eventual 90% reduction in collisions, saving tens of thousands of lives and hundreds of billions of dollars in losses in the United States alone.

Of course driving may become boring and tedious for “save the manual” troglodytes like me; so I don’t welcome this technological progress with undiluted enthusiasm. But my concern today is with a different problem. What happens when harm is CAUSED by the new technology? For sure, many accidents involving autonomous vehicles will be the fault of “the other guy” (see what happened last week in Tempe, Arizona). But on occasion the autonomous vehicle itself will most assuredly take the rap. Three different kinds of events might occur:

  • The automated device may not function as designed. Humans manufacture autonomous vehicles (or manufacture the robots that manufacture the vehicles), and humans are not perfect. Manufacturing defects may lead a self-braking car not to brake when it must, for example.
  • The owner of the autonomous vehicle may not have been correctly instructed about its use and/or maintenance, or may have misunderstood information that was not tailored to her. Information is costly, of course. Information of “perfect” quality and quantity (for example, individualized expert tutors who would accompany owners in the vehicle) just doesn’t exist, or would be prohibitively expensive to provide. As a result, informational defects (often called “failure to warn” problems) may lead to mishandling of the vehicle, and to accidents.
  • Finally and most seriously (because it would involve an entire production run), the vehicle may have a design defect. Of course, design choices are intrinsic to all manufacturing, and every design choice involves tradeoffs between its costs and benefits and those of alternative designs. There is no such thing as a “totally safe” design; such a design would cost so much money that no one could afford it. And it’s not always clear which choice is “defective.” Current vehicles permit drivers to make choices, but in autonomous vehicles those choices will be pre-programmed. Two examples illustrate the problem (a toy code sketch after these examples shows where such a choice would live in the software):
    • First, should vehicles be programmed never to exceed the speed limit? A reasonable driver might exceed speed limits in emergency avoidance situations, for example. What if her autonomous vehicle will not “speed” in an emergency (say, to avoid a charging moose on a rural highway) and a fatal collision results?
    • Second, what if the driver of a vehicle is presented with a split-second tragic option of hitting a large obstacle like the moose (killing the driver) or swerving onto a seemingly empty sidewalk to avoid it (possibly endangering others but saving the driver)? Will the programmers foreclose the escape option even though the average driver might have swerved? If so, what if a fatality arises?
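
To see how concretely such a choice gets “pre-programmed,” consider the following deliberately toy sketch of the first example. Everything here is hypothetical: the flag names, the numbers, and the logic are mine, not any manufacturer's. The point is only that the decision about speeding in an emergency is made once, at the factory, by whoever sets the flag, and not by the driver facing the moose.

```python
# Purely hypothetical illustration -- not any manufacturer's actual software.
# The design choice lives in these factory-set constants, not in the driver's hands.
ALLOW_EMERGENCY_SPEED_OVERRIDE = False   # may the planner exceed the posted limit to avoid a collision?
EMERGENCY_OVERRIDE_CAP_MPH = 75.0        # hypothetical ceiling if the override is allowed

def max_permitted_speed(posted_limit_mph: float, emergency_avoidance: bool) -> float:
    """Top speed the motion planner is allowed to command in the current situation."""
    if emergency_avoidance and ALLOW_EMERGENCY_SPEED_OVERRIDE:
        return max(posted_limit_mph, EMERGENCY_OVERRIDE_CAP_MPH)
    return posted_limit_mph  # otherwise, never exceed the posted limit
```

However the flag is set, the trade-off a jury or regulator would later scrutinize was resolved by the programmers long before any particular emergency arose.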

How should tort law deal with these kinds of future problems? In my view, the answer follows from a sound understanding of the more sensible elements of America’s often troubling Products Liability law:

  • In the case of manufacturing defects, manufacturers of autonomous vehicles should be liable to victims for the accidents that result. Manufacturers have marketed a product that does not perform as advertised, and this misrepresentation provides both the moral grounds for liability and the appropriate economic incentives to perform efficient (not perfect; no one is perfect) quality control.
  • In the case of informational defects (failure-to-warn problems), manufacturers should be liable only if they were negligent (that is, if a reasonable manufacturer would have provided a better warning or better instructions). If, as seems likely, legislation or regulations stipulate what warning an autonomous vehicle should contain, compliance with such law or regulation should exclude liability, just as it should (for example) for the mandated warnings on prescription drugs.
  • In the case of design defects, the rule should again be based on negligence: was this design choice made by the manufacturer a good one, all things considered? Very important moral issues arise here (see my two moose examples above), and in some cases informed consent to the risks imposed by programming would likely be required. This is where design and information defects merge, and so it is totally appropriate that the same legal standard apply in both cases. These issues could be left to properly instructed juries’ evolving notions of reasonable care under the Common Law, or could be pre-empted by regulators (who might choose to maximize social utility at the cost of precluding driving choices heretofore felt to be reasonable). Such regulation should be very carefully debated before being adopted, but if it is adopted it should bind tort tribunals until public outcry leads to its change.

The General Aviation Revitalization Act has been mentioned as a model for legal treatment of autonomous vehicles. That statute helped rescue America’s small aircraft industry from near-death by establishing a statute of repose (a deadline after production, beyond which no products liability suit against an aircraft manufacturer may be filed). For a couple of reasons, that model doesn’t apply here, in my opinion. First, unlike general aviation companies in the 1990s, autonomous vehicle makers are not at death’s door: as of this writing, Tesla’s market value is greater than that of Ford. Second, problems with autonomous cars are likely to spring up early, not after 25 years’ use as with Cessnas and Beechcrafts. If the Common Law is not preferred as a means to allocate risks, it would be better to rely on federal pre-emption through detailed regulations covering the design and information content of vehicles. If, to the contrary, the market is seen as the superior determiner of the best designs and warnings, courts will be the initial locus of decision.

Some of us are still shifting gears ourselves, but we may be on borrowed time. A Brave New World of autonomous vehicles is on its way, and Tort law will have to adapt to it as it has adapted to new technologies in the past.
