GUEST BLOG: Hero Takes a Fall by Aegis Frumento Esq

June 13, 2019

Hero Takes a Fall

by Aegis J. Frumento, Partner, Stern Tannenbaum & Bell

One of the more interesting questions raised by our increasing reliance on artificial intelligence is who to blame when things go wrong. In past columns, I have suggested that the question makes no sense when no human is involved in the decision-making. The better analogy is when we blame God for hurricanes and floods. We deal with those so-called "Acts of God" not by suing God (delicious an idea though that might be), but by either sucking it up or spreading the risk. We do the latter by buying a policy or by imposing legal liability on another party who can better insure against or raise prices to cover the risk.

A soon-to-be-published study notes a third alternative when AI systems go bad. In Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction, Madeleine Clare Elish describes a tendency to blame a human -- any human -- who happens to be close by when AI decisions go wrong. Studying failures in automated systems, such as the Three Mile Island nuclear accident and the crash of Air France 447, Elish notes that although automated system failures caused the accidents, the media and the public were quick to misattribute blame to the humans who tried to prevent them. There are, she writes, "contradictory dynamics in which automation is seen as safer and superior in most instances, unless something goes wrong, at which point humans are regarded as safer and superior." Then, humans become the "moral crumple zone." Like the car frame that absorbs the shock of a crash to protect the human, the human bystander to an automated system absorbs the moral blame that we can't impose on the machine.

This is understandable. Modern studies in behavioral economics have shown that narratives are cognitively easy to grasp and remember, while data and logic -- that is, analysis -- take a lot more mental energy to process. Since we are all basically lazy, a good narrative will always grab us. And every good story needs a hero and a villain. When dealing in the moral crumple zone, the two tend to be the same.

Elish shows how the accused villains of Three Mile Island were the very plant operators who, trying desperately to prevent a meltdown, were blamed for making things worse. Elish quotes one report at the time: "If the operators had not intervened in that accident at Three Mile Island and shut off the pumps, the plant would have saved itself. They [the designers] had thought of absolutely everything except what would happen if the operators intervened anyway." Similarly, when Air France 447 gave conflicting signals about the aircraft's flight attitude, the pilots could not figure out what was going on as they frantically tried to right the plane. The investigators ultimately concluded that a mechanical malfunction "had set off a chain of events that caused the accident, although it was a series of responses by the crew that ultimately resulted in the crash."

The same thing now seems to be happening with self-driving cars, where any "safety driver" required to be on board will likely be the fall guy for any accident the car happens to get itself into. None of this sounds very fair, and in thinking about it I was drawn back to that first-year torts-class classic, Palsgraf v. Long Island Railroad. For those of you who've encountered this gem but need a refresher, and for those who've never heard of it, here's a quick reconstruction of what happened:

A passenger carrying a small nondescript package tries to jump onto a moving train. Two railroad employees, one on the train and the other on the platform, pull and push the passenger on board. In the process, he drops the package on the tracks. As the train rolls over it, it explodes; unknown to anyone but the hapless passenger, it contained fireworks. The blast rattles the platform, causing a set of scales to topple over onto Mrs. Palsgraf, minding her own business some 25 to 30 feet away. She sues the railroad, and gives legendary New York State Judges Benjamin Cardozo and William S. Andrews a chance to wax eloquent on who should get blamed for what.

Judge Cardozo wrote the majority opinion that gets passed down to law students. Negligence presupposes a duty to the injured person. Negligence can't just be "in the air." According to Cardozo, the duty is owed to those whom the actor can reasonably foresee might be injured by their actions. The railroad employees pushing and pulling the passenger with the hidden fireworks could not foresee an explosion, and even less that anything they did would impact Mrs. Palsgraf standing 30 feet away. She's not a foreseeable plaintiff, and so she loses.

And yet, rereading Palsgraf after several decades and with AI liability on my mind, I can't help thinking that Judge Andrews has the better of it for our age. Andrews says foreseeability of injury is not the point. There is "a duty imposed on each one of us to protect society from unnecessary danger, not to protect A, B or C alone." If an act causes injury, "it harms him a mile away as surely as it does those on the scene." Instead, Andrews asks what was the "proximate cause" of the injury. In this, Andrews becomes very philosophical, anticipating the "butterfly effect" by decades:

"A boy throws a stone into a pond. The ripples spread. The water level rises. The history of that pond is altered to all eternity. It will be altered by other causes also. Yet it will be forever the resultant of all causes combined. Each one will have an influence. How great only omniscience can say."

Okay . . . . But only one will be the "proximate cause" that results in legal liability. Which one? Andrews is quite candid about it: "What we do mean by the word 'proximate' is, that because of convenience, of public policy, of a rough sense of justice, the law arbitrarily declines to trace a series of events beyond a certain point. This is not logic. It is practical politics." As Oliver Wendell Holmes noted, the life of the law is not logic, but experience. "It is all a question of expediency. There are no fixed rules to govern our judgment. . . . There is in truth little to guide us other than common sense."

This refreshing example of judicial frankness seems to describe best the current state of AI liability. But our experience with AI is so new and so counterintuitive that it plays havoc with our common sense. We feel the need to hold some human responsible for things that go wrong. We can't quite yet live with the possibility that the machine just fucked up. But who to sue can be elusive.

For example, Bloomberg reported that an investor, Samathur Li Kin-kan, lost $20 million because a fully automated trading system sold out a position on a stop-loss order. A supercomputer named K1 monitors online news to fathom investor "sentiment" and then predicts which way the markets will go, "learning" as it goes. On this occasion, it predicted wrong.

Li sued Tyndaris Investments, the company that sponsored the fund the machine managed. Tyndaris did not program the machine, did not supervise its operation, and had no connection with its development. Li was at first enthused, saying in an email that AI-enabled trading "is exactly my kind of thing." But that was before he lost $20 million. Now Li claims Tyndaris misrepresented the AI system's "sophistication."

Maybe. Still, Tyndaris had nothing to do with the loss itself. Its main problem is that it was standing too close -- it is too conveniently within the moral crumple zone. So now, the hero that brought Li his fantasy AI trader may have to take the fall for that trader's fuck-up. We'll be seeing more of this.


Aegis J. Frumento
Stern Tannenbaum & Bell
Co-Head, Financial Markets Practice

380 Lexington Avenue
New York, NY 10168

Aegis Frumento is a partner of Stern Tannenbaum & Bell, and co-heads the firm's Financial Markets Practice. Mr. Frumento represents persons and businesses in all aspects of commercial, corporate and securities matters and dispute resolution (including trials and arbitrations); SEC and FINRA regulated firms and persons on regulatory compliance issues and in SEC and FINRA enforcement investigations and proceedings; and senior executives of public corporations on personal securities law and corporate governance matters. Mr. Frumento also represents clients in forming and registering broker-dealers and registered investment advisers, in developing compliance policies, procedures and controls, and in adopting proper disclosure documents. Those now include industry professionals looking to adapt blockchain technologies to finance and financial market enterprises.

Prior to joining the firm, Mr. Frumento was a managing director of Citigroup and Morgan Stanley, a partner and the head of the financial markets group of Duane Morris LLP, and the managing partner of Singer Frumento LLP.

He graduated from Harvard College in 1976 and New York University School of Law in 1979. Mr. Frumento is a frequent author and speaker on securities law issues, and is often quoted in the media on current securities law developments.

NOTE: The views expressed in this Guest Blog are those of the author and do not necessarily reflect those of the Blog.

Securities Industry Commentator
A legal, regulatory, and compliance feed
curated by veteran Wall Street lawyer Bill Singer
