Applying Isaac Asimov’s “Laws of Robotics” to the corporate operating system
I recently remembered reading about Isaac Asimov’s “Laws of Robotics” when I was young. In his science fiction stories, Asimov articulated the “Laws of Robotics” as critical safeguards that cannot be bypassed and that all conscious robots are designed to adhere to, in the following order:
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Later, Asimov added a Zeroth Law that precedes the First Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
The “Laws of Robotics” (including their ordering) were carefully thought through to ensure that robotic behavior ultimately serves human societies and helps them flourish. I think they provide a fitting metaphor for sustainable corporate governance reform. Modern corporations are akin to flawed robots in Asimov’s universe, designed to follow only the equivalent of the Second Law (a corporation must obey the law and serve its owners’ interests) and the Third Law (a corporation must avoid insolvency). Their basic programming is missing the equivalents of the Zeroth Law (a corporation may not harm humanity, or, by inaction, allow humanity to come to harm) and of the First Law (a corporation may not injure a human being or, through inaction, allow a human being to come to harm) as overriding safety features. Those corporations that have successfully integrated ESG factors into their management decision making may have voluntarily adopted the equivalent of the First Law, but with the wrong priority, since it is preceded by the Second and Third Laws. Moreover, every corporation’s operating system features a reward function that drives its behavior towards optimizing economic profit. Unfortunately, the reward function’s design is flawed, too, insofar as externalities and negative societal impacts (e.g. GHG emissions, modern slavery,…) do not proportionately reduce the reward (i.e. the economic profit) associated with the activities causing them: unless those activities are legally prohibited or prohibitively costly, corporations have every incentive to pursue optimal economic profit regardless.
As long as the actions (and inactions) of corporations didn’t threaten humanity, and insofar as human beings who suffered harm from corporate behavior could sue for damages, these safety flaws weren’t particularly noticeable. By now, however, we know that corporations’ climate inaction and relentless pursuit of economic profit (in particular by the fossil fuel industry) are putting humanity at existential risk, that many victims of modern slavery have no judiciary to turn to, and that future generations cannot travel back in time to sue today’s large carbon emitters for the damages and risks associated with rising global temperatures. In the context of this metaphor, the “Code for Corporate Citizenship” suggested by Robert Hinkley (which requires corporations to pursue economic profit “not at the expense of the environment, human rights, public health and safety, dignity of employees or the welfare of the communities in which the corporation operates”) essentially represents an operating system update that would patch a critical safety flaw by adding the equivalents of the Zeroth and First Laws, in their right order. In addition, effective carbon pricing and an effective supply chain law designed to protect human rights, for example, would patch corporations’ reward functions by substantially increasing the economic incentive to actively avoid and reduce carbon emissions and modern slavery.
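To make the reward-function metaphor concrete, here is a deliberately toy sketch in Python. Everything in it is a hypothetical illustration (the activities, the profit figures, and the flat carbon price are invented, not a real pricing model): without a carbon price, the reward equals raw profit and emissions are invisible to it; with a carbon price, every tonne of CO2 proportionately reduces the reward, so the profit-maximizing choice shifts towards the cleaner activity.

```python
# Toy model of a corporation's "reward function" with and without carbon pricing.
# All activities, figures, and the carbon price are hypothetical illustrations.

def reward(profit: float, co2_tonnes: float, carbon_price: float = 0.0) -> float:
    """Economic reward of an activity; the externality reduces it only if priced."""
    return profit - co2_tonnes * carbon_price

# Two hypothetical activities: one dirtier but more profitable, one cleaner.
dirty = {"profit": 1_000_000.0, "co2_tonnes": 8_000.0}
clean = {"profit": 700_000.0, "co2_tonnes": 500.0}

# Flawed operating system: the externality costs nothing, so "dirty" wins.
assert reward(**dirty) > reward(**clean)

# Patched reward function: a carbon price of 100 per tonne flips the incentive.
price = 100.0
assert reward(**dirty, carbon_price=price) < reward(**clean, carbon_price=price)

print(reward(**dirty, carbon_price=price))  # 1,000,000 - 800,000 = 200000.0
print(reward(**clean, carbon_price=price))  # 700,000 - 50,000 = 650000.0
```

The point of the sketch is only the ordering flip: the same profit-seeking optimizer, given a reward function in which the externality is priced, now prefers the activity that causes less harm.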