Columbia Researchers Studying How To Ensure Safety of Driverless Cars

By Sharon Di and Eric T. Talley

The first-of-its-kind traffic fatality in Tempe involving a self-driving car has elicited tremendous attention and alarm among the public and policy makers, who are concerned about the interaction of autonomous vehicles with drivers, cyclists and pedestrians. We are likely months away from having the results of a detailed investigation into what went wrong and which factors contributed most directly to the accident. There are several possibilities, ranging from the human operator to prevailing conditions to the behavior of the pedestrian.

From a legal perspective, these factual challenges are all but inevitable: Under Arizona law (and that of many other states), legal liability for accidents between automobiles and pedestrians typically involves a complex calculus of “comparative fault” assessments for each of the parties involved. The involvement of an autonomous vehicle can complicate matters further by adding other parties to the mix, such as the manufacturers of hardware and the programmers of software. Insurance coverage complicates things still more by bringing third-party insurers into the picture.
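To make the idea concrete, here is a deliberately simplified sketch in Python of how a pure comparative-fault rule apportions damages once fault shares have been assigned. The parties, percentages, and dollar figures are hypothetical placeholders for illustration, not a statement of Arizona law or of any actual case.

    # Illustrative only: simplified pure comparative-fault apportionment.
    # Fault shares and damages below are hypothetical, not drawn from any real case.

    def apportion_damages(total_damages, fault_shares):
        """Split total_damages in proportion to each party's assessed share of fault.

        fault_shares maps party name -> fraction of fault (fractions sum to 1.0).
        The injured party's own share reduces her recovery rather than creating
        a payment obligation.
        """
        assert abs(sum(fault_shares.values()) - 1.0) < 1e-9
        return {party: total_damages * share for party, share in fault_shares.items()}

    # Hypothetical fault assessment after an autonomous-vehicle accident:
    shares = {
        "vehicle operator": 0.30,
        "software designer": 0.40,
        "hardware manufacturer": 0.10,
        "pedestrian": 0.20,  # reduces the pedestrian's own recovery
    }
    print(apportion_damages(1_000_000, shares))

Even this toy version shows why adding manufacturers, programmers and insurers to the mix multiplies the number of fault shares a court must estimate.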

Many legal experts anticipate that, over time, products liability will come to play an increasingly important role in these cases, particularly as human operators or monitors are removed entirely from autonomous vehicles. While this transition may simplify the legal analysis in some ways (by removing one of the potentially contributing actors), it makes more pressing the need to apportion liability risk between accident victims and the businesses that manufacture, design and market autonomous vehicles. And here multiple challenges ensue, not the least of which is understanding what constitutes “reasonable” behavior by injured parties, since that requires assessing the strategic interaction between humans and machines.

For example, suppose a pedestrian identified the car as self-driving and, on that basis, assumed that it would come to a stop if she seized the right-of-way. How reasonable is her expectation that the algorithm will adapt, and how should she weigh the possibility that the technology will fail? This is a poorly understood aspect of human-machine interaction and adaptation that we should be serious about understanding.

The challenges may be greatest during the transition period, when self-driving cars share the road with human-driven cars, pedestrians and cyclists. Human beings will encounter self-driving cars alongside traditional drivers more and more often, especially in dense urban areas. It is plausible that human actors could attempt to “outsmart” the technology by cutting in and seizing self-driving cars’ right-of-way, taking it for granted that the technology will respond with countermeasures. Should that be the case, human actors may start exercising less precaution than they should and gradually become negligent on the road. If a reasonable liability system is not in place, it can encourage such reckless behavior and cause more accidents than would occur with no self-driving cars at all.

On the other hand, unless a credible and workable product liability standard is imposed on self-driving technology, manufacturers and software designers may be either too careless or too conservative in the driving algorithms they design and sell, decisions that have a critical impact on both traffic fatalities and the traffic efficiency we expect from the new technology. The interplay of these considerations is complex, both computationally and conceptually.
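To illustrate that interplay, here is a toy two-player game in Python: a pedestrian chooses how careful to be, a manufacturer chooses how conservative its driving algorithm is, and a liability rule determines what fraction of expected accident harm each side bears. All probabilities, costs, and the liability-share parameter are invented placeholders, not the authors’ model.

    # Illustrative only: a toy 2x2 game between a pedestrian and an AV manufacturer.
    # All numbers are hypothetical placeholders chosen to show how a liability rule
    # can shift equilibrium behavior, not to model any real system.

    from itertools import product

    ACCIDENT_PROB = {   # P(accident | pedestrian action, algorithm style)
        ("careful",  "conservative"): 0.001,
        ("careful",  "aggressive"):   0.01,
        ("careless", "conservative"): 0.02,
        ("careless", "aggressive"):   0.10,
    }
    ACCIDENT_COST = 1_000_000                                  # harm if an accident occurs
    PRECAUTION_COST = {"careful": 50, "careless": 0}           # pedestrian's effort cost
    EFFICIENCY_COST = {"conservative": 500, "aggressive": 0}   # manufacturer's lost efficiency

    def expected_costs(ped, alg, manufacturer_share):
        """Expected cost to (pedestrian, manufacturer) under a given liability split."""
        harm = ACCIDENT_PROB[(ped, alg)] * ACCIDENT_COST
        ped_cost = PRECAUTION_COST[ped] + (1 - manufacturer_share) * harm
        mfr_cost = EFFICIENCY_COST[alg] + manufacturer_share * harm
        return ped_cost, mfr_cost

    def nash_equilibria(manufacturer_share):
        """Pure-strategy equilibria: neither side gains by unilaterally switching."""
        peds, algs = ["careful", "careless"], ["conservative", "aggressive"]
        eq = []
        for ped, alg in product(peds, algs):
            p_cost, m_cost = expected_costs(ped, alg, manufacturer_share)
            ped_ok = all(expected_costs(p, alg, manufacturer_share)[0] >= p_cost for p in peds)
            mfr_ok = all(expected_costs(ped, a, manufacturer_share)[1] >= m_cost for a in algs)
            if ped_ok and mfr_ok:
                eq.append((ped, alg))
        return eq

    for share in (0.0, 0.5, 1.0):   # fraction of accident harm borne by the manufacturer
        print(f"manufacturer bears {share:.0%} of harm -> equilibria: {nash_equilibria(share)}")

In this toy parameterization, placing all of the expected harm on either side leads the other party to stop taking care, while an intermediate split sustains precaution by both; a realistic analysis would of course require far richer behavioral and traffic models.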

The problem is a difficult one, and the rapid pace of technological change ups the ante even further. Today’s self-driving cars can be clearly identified by the characteristic 360-degree spinning LIDAR on the roof, and in some cases by the absence of anyone in the driver’s seat. As human beings develop more strategies to “play” with self-driving cars, however, will the cars remain so easy to identify?

At Columbia University, a unique team of researchers, composed of experts in data science, transportation engineering, law, computational mathematics, and computer science, is trying to adapt previously unlinked models of traffic flow and accidents to understand how different liability regimes may influence human and manufacturer behavior. As autonomous vehicle technology spreads, the team’s goal is to develop sophisticated modeling frameworks that can be calibrated with real-world traffic flow and accident data, so as to generate policy proposals that produce a desirable combination of safety and efficiency. One critical question in their research is whether there is a plausible “tipping point” in the penetration of autonomous vehicle technology at which legal and regulatory institutions and practices must quickly adapt.
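As a purely illustrative sketch of what searching for such a tipping point might look like, the following Python snippet sweeps the share of autonomous vehicles in traffic and asks when aggregate accident risk first exceeds the all-human baseline, under a made-up assumption that mixed human-AV encounters grow riskier as humans learn to exploit the machines. Every parameter is a placeholder; the team’s actual frameworks are meant to be calibrated against real traffic-flow and accident data.

    # Illustrative only: a toy sweep over autonomous-vehicle penetration.
    # All risk parameters are invented placeholders, not calibrated values.

    RISK_HUMAN_HUMAN = 1.0   # per-encounter risk between two human drivers (normalized)
    RISK_AV_AV = 0.1         # AV-versus-AV encounters assumed far safer
    ADAPTATION_GAIN = 4.0    # how strongly humans "outsmart" AVs as penetration rises

    def mixed_encounter_risk(p):
        """Risk of a human-versus-AV encounter; assumed to rise with penetration p
        as humans learn to seize the right-of-way from machines expected to yield."""
        return 0.5 * RISK_HUMAN_HUMAN * (1.0 + ADAPTATION_GAIN * p)

    def aggregate_risk(p):
        """Expected accident risk per encounter when a fraction p of vehicles is
        autonomous, assuming road users are paired at random."""
        return ((1 - p) ** 2 * RISK_HUMAN_HUMAN
                + 2 * p * (1 - p) * mixed_encounter_risk(p)
                + p ** 2 * RISK_AV_AV)

    baseline = aggregate_risk(0.0)
    tipping = next((i / 100 for i in range(101) if aggregate_risk(i / 100) > baseline), None)
    if tipping is None:
        print("In this parameterization, mixed traffic never exceeds the all-human baseline.")
    else:
        print(f"Toy tipping point: risk first exceeds the all-human baseline near p = {tipping:.2f}")

The point, of course, is not any particular number but that such thresholds can be located once the models are fed real data.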

The innovation of the self-driving car has been generating buzz for some time in public, academic, and commercial domains. Indeed, we as a transportation community appreciate the benefits that this emergent technology is expected to bring to our day-to-day activities. A big question, however, is whether we are ready as a society to accept such a transformative change. Addressing the questions and concerns highlighted above is both urgent and necessary if we are to realize the expected benefits of self-driving cars; the crash that killed this pedestrian is testimony to that. Hence, comprehensive scientific research spanning traffic engineering, law, and computation is warranted to ensure that our future with self-driving cars is safe and secure.

Sharon Di is a member of the Data Science Institute and an Assistant Professor in the Department of Civil Engineering and Engineering Mechanics at Columbia University. She specializes in travel behavior analysis and transportation network modeling. Her research applies optimization, game theory, and data analytics to study transportation problems.

Eric T. Talley is a member of the Data Science Institute and an expert at the intersection of corporate law, governance, and finance. He teaches a variety of courses at Columbia Law School, including Corporate Law, Mergers and Acquisitions, and Contract/Commercial Law.

